00:00:00.000 Started by upstream project "autotest-per-patch" build number 120906 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.036 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.037 The recommended git tool is: git 00:00:00.037 using credential 00000000-0000-0000-0000-000000000002 00:00:00.039 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.057 Fetching changes from the remote Git repository 00:00:00.059 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.089 Using shallow fetch with depth 1 00:00:00.089 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.089 > git --version # timeout=10 00:00:00.131 > git --version # 'git version 2.39.2' 00:00:00.131 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.132 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.132 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.169 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.181 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.195 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD) 00:00:02.195 > git config core.sparsecheckout # timeout=10 00:00:02.207 > git read-tree -mu HEAD # timeout=10 00:00:02.224 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=5 00:00:02.244 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh" 00:00:02.244 > git rev-list --no-walk 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10 00:00:02.490 [Pipeline] Start of Pipeline 00:00:02.503 [Pipeline] library 00:00:02.504 Loading library shm_lib@master 00:00:02.504 Library shm_lib@master is cached. Copying from home. 00:00:02.520 [Pipeline] node 00:00:02.528 Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:02.530 [Pipeline] { 00:00:02.540 [Pipeline] catchError 00:00:02.541 [Pipeline] { 00:00:02.555 [Pipeline] wrap 00:00:02.562 [Pipeline] { 00:00:02.570 [Pipeline] stage 00:00:02.571 [Pipeline] { (Prologue) 00:00:02.585 [Pipeline] echo 00:00:02.586 Node: VM-host-SM4 00:00:02.591 [Pipeline] cleanWs 00:00:02.599 [WS-CLEANUP] Deleting project workspace... 00:00:02.599 [WS-CLEANUP] Deferred wipeout is used... 
00:00:02.609 [WS-CLEANUP] done 00:00:02.775 [Pipeline] setCustomBuildProperty 00:00:02.839 [Pipeline] nodesByLabel 00:00:02.840 Could not find any nodes with 'sorcerer' label 00:00:02.846 [Pipeline] retry 00:00:02.848 [Pipeline] { 00:00:02.873 [Pipeline] checkout 00:00:02.878 The recommended git tool is: git 00:00:02.887 using credential 00000000-0000-0000-0000-000000000002 00:00:02.890 Cloning the remote Git repository 00:00:02.892 Honoring refspec on initial clone 00:00:02.891 Cloning repository https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:02.891 > git init /var/jenkins/workspace/ubuntu22-vg-autotest/jbp # timeout=10 00:00:02.907 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:02.907 > git --version # timeout=10 00:00:02.912 > git --version # 'git version 2.25.1' 00:00:02.912 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:02.913 Setting http proxy: proxy-dmz.intel.com:911 00:00:02.913 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=10 00:00:09.359 Avoid second fetch 00:00:09.375 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD) 00:00:09.481 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh" 00:00:09.487 [Pipeline] } 00:00:09.506 [Pipeline] // retry 00:00:09.519 [Pipeline] nodesByLabel 00:00:09.520 Could not find any nodes with 'sorcerer' label 00:00:09.523 [Pipeline] retry 00:00:09.524 [Pipeline] { 00:00:09.542 [Pipeline] checkout 00:00:09.548 The recommended git tool is: NONE 00:00:09.557 using credential 00000000-0000-0000-0000-000000000002 00:00:09.561 Cloning the remote Git repository 00:00:09.563 Honoring refspec on initial clone 00:00:09.338 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:09.344 > git config --add remote.origin.fetch refs/heads/master # timeout=10 00:00:09.357 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.366 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.373 > git config core.sparsecheckout # timeout=10 00:00:09.377 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10 00:00:09.562 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:00:09.562 > git init /var/jenkins/workspace/ubuntu22-vg-autotest/spdk # timeout=10 00:00:09.573 Using reference repository: /var/ci_repos/spdk_multi 00:00:09.573 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:00:09.573 > git --version # timeout=10 00:00:09.577 > git --version # 'git version 2.25.1' 00:00:09.577 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:09.578 Setting http proxy: proxy-dmz.intel.com:911 00:00:09.578 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/47/22647/6 +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:28.708 Avoid second fetch 00:00:28.725 Checking out Revision 9fa7361dbc8c5232f5bc34d3ba601269c5e097e6 (FETCH_HEAD) 00:00:28.684 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:00:28.688 > git config --add remote.origin.fetch refs/changes/47/22647/6 # timeout=10 00:00:28.694 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:28.707 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:28.715 > git rev-parse FETCH_HEAD^{commit} # timeout=10 
00:00:28.724 > git config core.sparsecheckout # timeout=10 00:00:28.727 > git checkout -f 9fa7361dbc8c5232f5bc34d3ba601269c5e097e6 # timeout=10 00:00:29.161 Commit message: "trace: rename trace_event's poller_id to owner_id" 00:00:29.171 First time build. Skipping changelog. 00:00:29.160 > git rev-list --no-walk 36faa8c312bf9059b86e0f503d7fd6b43c1498e6 # timeout=10 00:00:29.172 > git remote # timeout=10 00:00:29.177 > git submodule init # timeout=10 00:00:29.249 > git submodule sync # timeout=10 00:00:29.314 > git config --get remote.origin.url # timeout=10 00:00:29.322 > git submodule init # timeout=10 00:00:29.383 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:00:29.387 > git config --get submodule.dpdk.url # timeout=10 00:00:29.392 > git remote # timeout=10 00:00:29.398 > git config --get remote.origin.url # timeout=10 00:00:29.403 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:00:29.406 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:00:29.410 > git remote # timeout=10 00:00:29.413 > git config --get remote.origin.url # timeout=10 00:00:29.417 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:00:29.421 > git config --get submodule.isa-l.url # timeout=10 00:00:29.424 > git remote # timeout=10 00:00:29.430 > git config --get remote.origin.url # timeout=10 00:00:29.434 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:00:29.437 > git config --get submodule.ocf.url # timeout=10 00:00:29.442 > git remote # timeout=10 00:00:29.447 > git config --get remote.origin.url # timeout=10 00:00:29.452 > git config -f .gitmodules --get submodule.ocf.path # timeout=10 00:00:29.455 > git config --get submodule.libvfio-user.url # timeout=10 00:00:29.460 > git remote # timeout=10 00:00:29.465 > git config --get remote.origin.url # timeout=10 00:00:29.469 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:00:29.473 > git config --get submodule.xnvme.url # timeout=10 00:00:29.477 > git remote # timeout=10 00:00:29.482 > git config --get remote.origin.url # timeout=10 00:00:29.488 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:00:29.491 > git config --get submodule.isa-l-crypto.url # timeout=10 00:00:29.495 > git remote # timeout=10 00:00:29.501 > git config --get remote.origin.url # timeout=10 00:00:29.506 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:00:29.515 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:29.516 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:29.516 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:29.516 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:29.516 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:29.516 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:29.516 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:29.516 Setting http proxy: proxy-dmz.intel.com:911 00:00:29.516 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:00:29.516 Setting http proxy: proxy-dmz.intel.com:911 00:00:29.516 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:00:29.517 Setting http proxy: proxy-dmz.intel.com:911 00:00:29.517 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:00:29.517 Setting http proxy: 
proxy-dmz.intel.com:911 00:00:29.517 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 00:00:29.517 Setting http proxy: proxy-dmz.intel.com:911 00:00:29.517 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:00:29.517 Setting http proxy: proxy-dmz.intel.com:911 00:00:29.517 Setting http proxy: proxy-dmz.intel.com:911 00:00:29.517 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:00:29.517 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:00:55.764 [Pipeline] dir 00:00:55.765 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/spdk 00:00:55.767 [Pipeline] { 00:00:55.786 [Pipeline] sh 00:00:56.069 ++ nproc 00:00:56.069 + threads=88 00:00:56.069 + git repack -a -d --threads=88 00:01:01.340 + git submodule foreach git repack -a -d --threads=88 00:01:01.340 Entering 'dpdk' 00:01:06.625 Entering 'intel-ipsec-mb' 00:01:06.883 Entering 'isa-l' 00:01:07.141 Entering 'isa-l-crypto' 00:01:07.141 Entering 'libvfio-user' 00:01:07.398 Entering 'ocf' 00:01:07.965 Entering 'xnvme' 00:01:08.223 + find .git -type f -name alternates -print -delete 00:01:08.223 .git/objects/info/alternates 00:01:08.223 .git/modules/dpdk/objects/info/alternates 00:01:08.223 .git/modules/ocf/objects/info/alternates 00:01:08.223 .git/modules/isa-l/objects/info/alternates 00:01:08.223 .git/modules/xnvme/objects/info/alternates 00:01:08.223 .git/modules/libvfio-user/objects/info/alternates 00:01:08.223 .git/modules/isa-l-crypto/objects/info/alternates 00:01:08.223 .git/modules/intel-ipsec-mb/objects/info/alternates 00:01:08.233 [Pipeline] } 00:01:08.253 [Pipeline] // dir 00:01:08.259 [Pipeline] } 00:01:08.274 [Pipeline] // retry 00:01:08.282 [Pipeline] sh 00:01:08.562 + git -C spdk log --oneline -n5 00:01:08.562 9fa7361db trace: rename trace_event's poller_id to owner_id 00:01:08.562 b5fc85e02 trace: add concept of "owner" to trace files 00:01:08.562 b5f3c57c7 trace: rename "per_lcore_history" to just "data" 00:01:08.562 d47c35e4c trace: add trace_flags_fini() 00:01:08.562 319e2398a bdev/nvme: refactor "current" calculation for get_io_path RPC 00:01:08.580 [Pipeline] writeFile 00:01:08.595 [Pipeline] sh 00:01:08.872 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:08.885 [Pipeline] sh 00:01:09.168 + cat autorun-spdk.conf 00:01:09.168 SPDK_TEST_UNITTEST=1 00:01:09.168 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.168 SPDK_TEST_NVME=1 00:01:09.168 SPDK_TEST_BLOCKDEV=1 00:01:09.168 SPDK_RUN_ASAN=1 00:01:09.168 SPDK_RUN_UBSAN=1 00:01:09.168 SPDK_TEST_RAID5=1 00:01:09.168 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:09.174 RUN_NIGHTLY=0 00:01:09.176 [Pipeline] } 00:01:09.193 [Pipeline] // stage 00:01:09.210 [Pipeline] stage 00:01:09.212 [Pipeline] { (Run VM) 00:01:09.229 [Pipeline] sh 00:01:09.509 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:09.509 + echo 'Start stage prepare_nvme.sh' 00:01:09.509 Start stage prepare_nvme.sh 00:01:09.509 + [[ -n 9 ]] 00:01:09.509 + disk_prefix=ex9 00:01:09.509 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]] 00:01:09.509 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]] 00:01:09.509 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf 00:01:09.509 ++ SPDK_TEST_UNITTEST=1 00:01:09.509 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.509 ++ SPDK_TEST_NVME=1 00:01:09.509 ++ SPDK_TEST_BLOCKDEV=1 00:01:09.509 ++ SPDK_RUN_ASAN=1 00:01:09.509 ++ 
SPDK_RUN_UBSAN=1 00:01:09.509 ++ SPDK_TEST_RAID5=1 00:01:09.509 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:09.509 ++ RUN_NIGHTLY=0 00:01:09.509 + cd /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:09.509 + nvme_files=() 00:01:09.509 + declare -A nvme_files 00:01:09.509 + backend_dir=/var/lib/libvirt/images/backends 00:01:09.509 + nvme_files['nvme.img']=5G 00:01:09.509 + nvme_files['nvme-cmb.img']=5G 00:01:09.509 + nvme_files['nvme-multi0.img']=4G 00:01:09.509 + nvme_files['nvme-multi1.img']=4G 00:01:09.509 + nvme_files['nvme-multi2.img']=4G 00:01:09.509 + nvme_files['nvme-openstack.img']=8G 00:01:09.509 + nvme_files['nvme-zns.img']=5G 00:01:09.509 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:09.509 + (( SPDK_TEST_FTL == 1 )) 00:01:09.509 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:09.509 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:09.509 + for nvme in "${!nvme_files[@]}" 00:01:09.509 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:01:09.509 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:09.509 + for nvme in "${!nvme_files[@]}" 00:01:09.509 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:01:09.768 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:09.768 + for nvme in "${!nvme_files[@]}" 00:01:09.768 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:01:09.768 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:09.768 + for nvme in "${!nvme_files[@]}" 00:01:09.768 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:01:09.768 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:09.768 + for nvme in "${!nvme_files[@]}" 00:01:09.768 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:01:10.027 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.027 + for nvme in "${!nvme_files[@]}" 00:01:10.027 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:01:10.027 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.027 + for nvme in "${!nvme_files[@]}" 00:01:10.027 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:01:10.285 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.285 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:01:10.285 + echo 'End stage prepare_nvme.sh' 00:01:10.285 End stage prepare_nvme.sh 00:01:10.296 [Pipeline] sh 00:01:10.578 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:10.578 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme.img -H -a -v -f ubuntu2204 00:01:10.578 00:01:10.578 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant 00:01:10.578 
SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk 00:01:10.578 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest 00:01:10.578 HELP=0 00:01:10.578 DRY_RUN=0 00:01:10.578 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme.img, 00:01:10.578 NVME_DISKS_TYPE=nvme, 00:01:10.578 NVME_AUTO_CREATE=0 00:01:10.578 NVME_DISKS_NAMESPACES=, 00:01:10.578 NVME_CMB=, 00:01:10.578 NVME_PMR=, 00:01:10.578 NVME_ZNS=, 00:01:10.578 NVME_MS=, 00:01:10.578 NVME_FDP=, 00:01:10.578 SPDK_VAGRANT_DISTRO=ubuntu2204 00:01:10.578 SPDK_VAGRANT_VMCPU=10 00:01:10.578 SPDK_VAGRANT_VMRAM=12288 00:01:10.578 SPDK_VAGRANT_PROVIDER=libvirt 00:01:10.578 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:10.578 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:10.578 SPDK_OPENSTACK_NETWORK=0 00:01:10.578 VAGRANT_PACKAGE_BOX=0 00:01:10.578 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:10.578 FORCE_DISTRO=true 00:01:10.578 VAGRANT_BOX_VERSION= 00:01:10.578 EXTRA_VAGRANTFILES= 00:01:10.578 NIC_MODEL=e1000 00:01:10.578 00:01:10.836 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt' 00:01:10.836 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest 00:01:15.049 Bringing machine 'default' up with 'libvirt' provider... 00:01:15.308 ==> default: Creating image (snapshot of base box volume). 00:01:15.308 ==> default: Creating domain with the following settings... 00:01:15.308 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1713917708_8e732c673c07c99dd604 00:01:15.308 ==> default: -- Domain type: kvm 00:01:15.308 ==> default: -- Cpus: 10 00:01:15.308 ==> default: -- Feature: acpi 00:01:15.308 ==> default: -- Feature: apic 00:01:15.308 ==> default: -- Feature: pae 00:01:15.308 ==> default: -- Memory: 12288M 00:01:15.308 ==> default: -- Memory Backing: hugepages: 00:01:15.308 ==> default: -- Management MAC: 00:01:15.308 ==> default: -- Loader: 00:01:15.308 ==> default: -- Nvram: 00:01:15.308 ==> default: -- Base box: spdk/ubuntu2204 00:01:15.308 ==> default: -- Storage pool: default 00:01:15.308 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1713917708_8e732c673c07c99dd604.img (20G) 00:01:15.308 ==> default: -- Volume Cache: default 00:01:15.308 ==> default: -- Kernel: 00:01:15.308 ==> default: -- Initrd: 00:01:15.308 ==> default: -- Graphics Type: vnc 00:01:15.308 ==> default: -- Graphics Port: -1 00:01:15.308 ==> default: -- Graphics IP: 127.0.0.1 00:01:15.308 ==> default: -- Graphics Password: Not defined 00:01:15.308 ==> default: -- Video Type: cirrus 00:01:15.308 ==> default: -- Video VRAM: 9216 00:01:15.308 ==> default: -- Sound Type: 00:01:15.308 ==> default: -- Keymap: en-us 00:01:15.308 ==> default: -- TPM Path: 00:01:15.308 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:15.308 ==> default: -- Command line args: 00:01:15.308 ==> default: -> value=-device, 00:01:15.308 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:15.308 ==> default: -> value=-drive, 00:01:15.308 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-0-drive0, 00:01:15.308 ==> default: -> value=-device, 00:01:15.308 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:15.566 ==> default: Creating shared folders metadata... 00:01:15.566 ==> default: Starting domain. 
00:01:16.944 ==> default: Waiting for domain to get an IP address... 00:01:29.150 ==> default: Waiting for SSH to become available... 00:01:29.150 ==> default: Configuring and enabling network interfaces... 00:01:34.420 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:40.977 ==> default: Mounting SSHFS shared folder... 00:01:41.545 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:01:41.545 ==> default: Checking Mount.. 00:01:42.515 ==> default: Folder Successfully Mounted! 00:01:42.515 ==> default: Running provisioner: file... 00:01:42.774 default: ~/.gitconfig => .gitconfig 00:01:43.032 00:01:43.032 SUCCESS! 00:01:43.032 00:01:43.032 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:01:43.032 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:43.032 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm. 00:01:43.032 00:01:43.043 [Pipeline] } 00:01:43.064 [Pipeline] // stage 00:01:43.074 [Pipeline] dir 00:01:43.075 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt 00:01:43.077 [Pipeline] { 00:01:43.095 [Pipeline] catchError 00:01:43.097 [Pipeline] { 00:01:43.115 [Pipeline] sh 00:01:43.394 + vagrant ssh-config --host vagrant 00:01:43.394 + sed -ne /^Host/,$p 00:01:43.394 + tee ssh_conf 00:01:47.581 Host vagrant 00:01:47.581 HostName 192.168.121.238 00:01:47.581 User vagrant 00:01:47.581 Port 22 00:01:47.581 UserKnownHostsFile /dev/null 00:01:47.581 StrictHostKeyChecking no 00:01:47.581 PasswordAuthentication no 00:01:47.581 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:01:47.581 IdentitiesOnly yes 00:01:47.581 LogLevel FATAL 00:01:47.581 ForwardAgent yes 00:01:47.581 ForwardX11 yes 00:01:47.581 00:01:47.595 [Pipeline] withEnv 00:01:47.597 [Pipeline] { 00:01:47.613 [Pipeline] sh 00:01:47.891 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:47.891 source /etc/os-release 00:01:47.891 [[ -e /image.version ]] && img=$(< /image.version) 00:01:47.891 # Minimal, systemd-like check. 00:01:47.891 if [[ -e /.dockerenv ]]; then 00:01:47.891 # Clear garbage from the node's name: 00:01:47.891 # agt-er_autotest_547-896 -> autotest_547-896 00:01:47.891 # $HOSTNAME is the actual container id 00:01:47.891 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:47.891 if mountpoint -q /etc/hostname; then 00:01:47.891 # We can assume this is a mount from a host where container is running, 00:01:47.891 # so fetch its hostname to easily identify the target swarm worker. 
00:01:47.891 container="$(< /etc/hostname) ($agent)" 00:01:47.891 else 00:01:47.891 # Fallback 00:01:47.891 container=$agent 00:01:47.891 fi 00:01:47.891 fi 00:01:47.891 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:47.891 00:01:48.163 [Pipeline] } 00:01:48.183 [Pipeline] // withEnv 00:01:48.193 [Pipeline] setCustomBuildProperty 00:01:48.208 [Pipeline] stage 00:01:48.210 [Pipeline] { (Tests) 00:01:48.229 [Pipeline] sh 00:01:48.510 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:48.784 [Pipeline] timeout 00:01:48.784 Timeout set to expire in 1 hr 0 min 00:01:48.786 [Pipeline] { 00:01:48.801 [Pipeline] sh 00:01:49.081 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:49.649 HEAD is now at 9fa7361db trace: rename trace_event's poller_id to owner_id 00:01:49.663 [Pipeline] sh 00:01:49.942 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:50.215 [Pipeline] sh 00:01:50.495 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:50.769 [Pipeline] sh 00:01:51.185 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:01:51.185 ++ readlink -f spdk_repo 00:01:51.185 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:51.185 + [[ -n /home/vagrant/spdk_repo ]] 00:01:51.185 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:51.185 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:51.185 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:51.185 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:51.185 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:51.185 + cd /home/vagrant/spdk_repo 00:01:51.185 + source /etc/os-release 00:01:51.185 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:01:51.185 ++ NAME=Ubuntu 00:01:51.185 ++ VERSION_ID=22.04 00:01:51.185 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:01:51.185 ++ VERSION_CODENAME=jammy 00:01:51.185 ++ ID=ubuntu 00:01:51.185 ++ ID_LIKE=debian 00:01:51.185 ++ HOME_URL=https://www.ubuntu.com/ 00:01:51.185 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:51.185 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:51.185 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:51.185 ++ UBUNTU_CODENAME=jammy 00:01:51.185 + uname -a 00:01:51.185 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:51.185 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:51.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:01:51.753 Hugepages 00:01:51.753 node hugesize free / total 00:01:51.753 node0 1048576kB 0 / 0 00:01:51.753 node0 2048kB 0 / 0 00:01:51.753 00:01:51.753 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:51.753 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:51.753 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:51.753 + rm -f /tmp/spdk-ld-path 00:01:51.753 + source autorun-spdk.conf 00:01:51.753 ++ SPDK_TEST_UNITTEST=1 00:01:51.753 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.753 ++ SPDK_TEST_NVME=1 00:01:51.753 ++ SPDK_TEST_BLOCKDEV=1 00:01:51.753 ++ SPDK_RUN_ASAN=1 00:01:51.753 ++ SPDK_RUN_UBSAN=1 00:01:51.753 ++ SPDK_TEST_RAID5=1 00:01:51.753 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:51.753 ++ RUN_NIGHTLY=0 00:01:51.753 + (( SPDK_TEST_NVME_CMB == 1 || 
SPDK_TEST_NVME_PMR == 1 )) 00:01:51.753 + [[ -n '' ]] 00:01:51.753 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:51.753 + for M in /var/spdk/build-*-manifest.txt 00:01:51.753 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:51.753 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:51.753 + for M in /var/spdk/build-*-manifest.txt 00:01:51.753 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:51.753 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:51.753 ++ uname 00:01:51.753 + [[ Linux == \L\i\n\u\x ]] 00:01:51.753 + sudo dmesg -T 00:01:51.753 + sudo dmesg --clear 00:01:51.753 + dmesg_pid=2094 00:01:51.753 + sudo dmesg -Tw 00:01:51.753 + [[ Ubuntu == FreeBSD ]] 00:01:51.753 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.753 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.753 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:51.753 + [[ -x /usr/src/fio-static/fio ]] 00:01:51.753 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:51.753 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:51.753 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:51.753 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:51.753 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:51.753 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:51.753 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:51.753 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:51.753 Test configuration: 00:01:51.753 SPDK_TEST_UNITTEST=1 00:01:51.753 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.753 SPDK_TEST_NVME=1 00:01:51.753 SPDK_TEST_BLOCKDEV=1 00:01:51.753 SPDK_RUN_ASAN=1 00:01:51.753 SPDK_RUN_UBSAN=1 00:01:51.753 SPDK_TEST_RAID5=1 00:01:51.753 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.011 RUN_NIGHTLY=0 00:15:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:52.011 00:15:44 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:52.011 00:15:44 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:52.011 00:15:44 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:52.011 00:15:44 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:52.011 00:15:44 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:52.011 00:15:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:52.011 00:15:44 -- paths/export.sh@5 -- $ export PATH 00:01:52.011 00:15:44 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:52.011 00:15:44 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:52.011 00:15:44 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:52.011 00:15:44 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713917744.XXXXXX 00:01:52.011 00:15:44 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713917744.0XRQN9 00:01:52.011 00:15:44 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:52.011 00:15:44 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:52.011 00:15:44 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:52.011 00:15:44 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:52.011 00:15:44 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:52.011 00:15:44 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:52.011 00:15:44 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:52.011 00:15:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.011 00:15:44 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:01:52.011 00:15:44 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:01:52.011 00:15:44 -- pm/common@17 -- $ local monitor 00:01:52.011 00:15:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.011 00:15:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2130 00:01:52.011 00:15:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:52.011 00:15:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2132 00:01:52.011 00:15:44 -- pm/common@21 -- $ date +%s 00:01:52.011 00:15:44 -- pm/common@26 -- $ sleep 1 00:01:52.011 00:15:44 -- pm/common@21 -- $ date +%s 00:01:52.011 00:15:44 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713917744 00:01:52.011 00:15:44 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713917744 00:01:52.012 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713917744_collect-vmstat.pm.log 00:01:52.012 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713917744_collect-cpu-load.pm.log 00:01:52.948 00:15:45 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:01:52.948 00:15:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:52.948 00:15:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:52.948 00:15:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:52.948 00:15:45 -- spdk/autobuild.sh@16 -- $ date -u 00:01:52.948 Wed Apr 24 00:15:45 UTC 2024 00:01:52.948 00:15:45 -- 
spdk/autobuild.sh@17 -- $ git describe --tags 00:01:52.948 v24.05-pre-427-g9fa7361db 00:01:52.948 00:15:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:52.948 00:15:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:52.948 00:15:45 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:52.948 00:15:45 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:52.948 00:15:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.948 ************************************ 00:01:52.948 START TEST asan 00:01:52.948 ************************************ 00:01:52.948 using asan 00:01:52.948 00:15:45 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:01:52.948 00:01:52.948 real 0m0.000s 00:01:52.948 user 0m0.000s 00:01:52.948 sys 0m0.000s 00:01:52.948 00:15:45 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:52.948 ************************************ 00:01:52.948 END TEST asan 00:01:52.948 00:15:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.948 ************************************ 00:01:52.948 00:15:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:52.948 00:15:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:52.948 00:15:45 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:52.948 00:15:45 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:52.948 00:15:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.207 ************************************ 00:01:53.207 START TEST ubsan 00:01:53.207 ************************************ 00:01:53.207 using ubsan 00:01:53.207 00:15:45 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:01:53.207 00:01:53.207 real 0m0.000s 00:01:53.207 user 0m0.000s 00:01:53.207 sys 0m0.000s 00:01:53.207 00:15:45 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:53.207 00:15:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.207 ************************************ 00:01:53.207 END TEST ubsan 00:01:53.207 ************************************ 00:01:53.207 00:15:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:53.207 00:15:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:53.207 00:15:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:53.207 00:15:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:53.207 00:15:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:53.207 00:15:45 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:53.207 00:15:45 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:53.207 00:15:45 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build 00:01:53.207 00:15:45 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:01:53.207 00:15:45 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:53.207 00:15:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.207 ************************************ 00:01:53.207 START TEST unittest_build 00:01:53.207 ************************************ 00:01:53.207 00:15:45 -- common/autotest_common.sh@1111 -- $ _unittest_build 00:01:53.207 00:15:45 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:01:53.207 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:53.207 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:53.774 Using 'verbs' RDMA provider 00:02:09.585 Configuring ISA-L (logfile: 
/home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:24.494 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:24.753 Creating mk/config.mk...done. 00:02:24.753 Creating mk/cc.flags.mk...done. 00:02:24.753 Type 'make' to build. 00:02:24.753 00:16:18 -- common/autobuild_common.sh@403 -- $ make -j10 00:02:25.012 make[1]: Nothing to be done for 'all'. 00:02:26.384 help2man: can't get `--help' info from ./programs/igzip 00:02:26.384 Try `--no-discard-stderr' if option outputs to stderr 00:02:26.384 make[3]: [Makefile:4943: programs/igzip.1] Error 127 (ignored) 00:02:41.258 The Meson build system 00:02:41.258 Version: 1.4.0 00:02:41.258 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:41.258 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:41.258 Build type: native build 00:02:41.258 Program cat found: YES (/usr/bin/cat) 00:02:41.258 Project name: DPDK 00:02:41.258 Project version: 23.11.0 00:02:41.258 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:02:41.258 C linker for the host machine: cc ld.bfd 2.38 00:02:41.258 Host machine cpu family: x86_64 00:02:41.258 Host machine cpu: x86_64 00:02:41.258 Message: ## Building in Developer Mode ## 00:02:41.258 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:41.258 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:41.258 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:41.258 Program python3 found: YES (/usr/bin/python3) 00:02:41.258 Program cat found: YES (/usr/bin/cat) 00:02:41.258 Compiler for C supports arguments -march=native: YES 00:02:41.258 Checking for size of "void *" : 8 00:02:41.258 Checking for size of "void *" : 8 (cached) 00:02:41.258 Library m found: YES 00:02:41.258 Library numa found: YES 00:02:41.258 Has header "numaif.h" : YES 00:02:41.258 Library fdt found: NO 00:02:41.258 Library execinfo found: NO 00:02:41.258 Has header "execinfo.h" : YES 00:02:41.258 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:02:41.258 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:41.258 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:41.258 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:41.258 Run-time dependency openssl found: YES 3.0.2 00:02:41.258 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:41.258 Library pcap found: NO 00:02:41.258 Compiler for C supports arguments -Wcast-qual: YES 00:02:41.258 Compiler for C supports arguments -Wdeprecated: YES 00:02:41.258 Compiler for C supports arguments -Wformat: YES 00:02:41.258 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:41.258 Compiler for C supports arguments -Wformat-security: YES 00:02:41.258 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:41.258 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:41.258 Compiler for C supports arguments -Wnested-externs: YES 00:02:41.258 Compiler for C supports arguments -Wold-style-definition: YES 00:02:41.258 Compiler for C supports arguments -Wpointer-arith: YES 00:02:41.258 Compiler for C supports arguments -Wsign-compare: YES 00:02:41.258 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:41.258 Compiler for C supports arguments -Wundef: YES 00:02:41.258 Compiler for C supports arguments -Wwrite-strings: YES 00:02:41.258 Compiler for C supports arguments 
-Wno-address-of-packed-member: YES 00:02:41.258 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:41.258 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:41.258 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:41.258 Program objdump found: YES (/usr/bin/objdump) 00:02:41.258 Compiler for C supports arguments -mavx512f: YES 00:02:41.258 Checking if "AVX512 checking" compiles: YES 00:02:41.258 Fetching value of define "__SSE4_2__" : 1 00:02:41.258 Fetching value of define "__AES__" : 1 00:02:41.258 Fetching value of define "__AVX__" : 1 00:02:41.258 Fetching value of define "__AVX2__" : 1 00:02:41.258 Fetching value of define "__AVX512BW__" : 1 00:02:41.258 Fetching value of define "__AVX512CD__" : 1 00:02:41.258 Fetching value of define "__AVX512DQ__" : 1 00:02:41.258 Fetching value of define "__AVX512F__" : 1 00:02:41.258 Fetching value of define "__AVX512VL__" : 1 00:02:41.258 Fetching value of define "__PCLMUL__" : 1 00:02:41.258 Fetching value of define "__RDRND__" : 1 00:02:41.258 Fetching value of define "__RDSEED__" : 1 00:02:41.258 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:41.258 Fetching value of define "__znver1__" : (undefined) 00:02:41.258 Fetching value of define "__znver2__" : (undefined) 00:02:41.258 Fetching value of define "__znver3__" : (undefined) 00:02:41.258 Fetching value of define "__znver4__" : (undefined) 00:02:41.258 Library asan found: YES 00:02:41.258 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:41.258 Message: lib/log: Defining dependency "log" 00:02:41.258 Message: lib/kvargs: Defining dependency "kvargs" 00:02:41.258 Message: lib/telemetry: Defining dependency "telemetry" 00:02:41.258 Library rt found: YES 00:02:41.258 Checking for function "getentropy" : NO 00:02:41.258 Message: lib/eal: Defining dependency "eal" 00:02:41.258 Message: lib/ring: Defining dependency "ring" 00:02:41.258 Message: lib/rcu: Defining dependency "rcu" 00:02:41.258 Message: lib/mempool: Defining dependency "mempool" 00:02:41.258 Message: lib/mbuf: Defining dependency "mbuf" 00:02:41.258 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:41.258 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:41.258 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:41.258 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:41.258 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:41.258 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:41.258 Compiler for C supports arguments -mpclmul: YES 00:02:41.258 Compiler for C supports arguments -maes: YES 00:02:41.258 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:41.258 Compiler for C supports arguments -mavx512bw: YES 00:02:41.258 Compiler for C supports arguments -mavx512dq: YES 00:02:41.258 Compiler for C supports arguments -mavx512vl: YES 00:02:41.258 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:41.258 Compiler for C supports arguments -mavx2: YES 00:02:41.258 Compiler for C supports arguments -mavx: YES 00:02:41.258 Message: lib/net: Defining dependency "net" 00:02:41.258 Message: lib/meter: Defining dependency "meter" 00:02:41.258 Message: lib/ethdev: Defining dependency "ethdev" 00:02:41.258 Message: lib/pci: Defining dependency "pci" 00:02:41.258 Message: lib/cmdline: Defining dependency "cmdline" 00:02:41.258 Message: lib/hash: Defining dependency "hash" 00:02:41.258 Message: lib/timer: Defining dependency "timer" 00:02:41.258 Message: lib/compressdev: 
Defining dependency "compressdev" 00:02:41.258 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:41.258 Message: lib/dmadev: Defining dependency "dmadev" 00:02:41.258 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:41.258 Message: lib/power: Defining dependency "power" 00:02:41.258 Message: lib/reorder: Defining dependency "reorder" 00:02:41.258 Message: lib/security: Defining dependency "security" 00:02:41.258 Has header "linux/userfaultfd.h" : YES 00:02:41.258 Has header "linux/vduse.h" : YES 00:02:41.258 Message: lib/vhost: Defining dependency "vhost" 00:02:41.258 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:41.258 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:41.258 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:41.258 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:41.258 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:41.258 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:41.258 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:41.258 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:41.258 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:41.258 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:41.258 Program doxygen found: YES (/usr/bin/doxygen) 00:02:41.258 Configuring doxy-api-html.conf using configuration 00:02:41.258 Configuring doxy-api-man.conf using configuration 00:02:41.258 Program mandb found: YES (/usr/bin/mandb) 00:02:41.258 Program sphinx-build found: NO 00:02:41.258 Configuring rte_build_config.h using configuration 00:02:41.258 Message: 00:02:41.258 ================= 00:02:41.258 Applications Enabled 00:02:41.258 ================= 00:02:41.258 00:02:41.258 apps: 00:02:41.258 00:02:41.258 00:02:41.258 Message: 00:02:41.258 ================= 00:02:41.258 Libraries Enabled 00:02:41.258 ================= 00:02:41.258 00:02:41.258 libs: 00:02:41.258 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:41.258 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:41.258 cryptodev, dmadev, power, reorder, security, vhost, 00:02:41.258 00:02:41.258 Message: 00:02:41.258 =============== 00:02:41.258 Drivers Enabled 00:02:41.258 =============== 00:02:41.258 00:02:41.258 common: 00:02:41.258 00:02:41.258 bus: 00:02:41.258 pci, vdev, 00:02:41.258 mempool: 00:02:41.258 ring, 00:02:41.258 dma: 00:02:41.258 00:02:41.258 net: 00:02:41.258 00:02:41.258 crypto: 00:02:41.258 00:02:41.258 compress: 00:02:41.258 00:02:41.259 vdpa: 00:02:41.259 00:02:41.259 00:02:41.259 Message: 00:02:41.259 ================= 00:02:41.259 Content Skipped 00:02:41.259 ================= 00:02:41.259 00:02:41.259 apps: 00:02:41.259 dumpcap: explicitly disabled via build config 00:02:41.259 graph: explicitly disabled via build config 00:02:41.259 pdump: explicitly disabled via build config 00:02:41.259 proc-info: explicitly disabled via build config 00:02:41.259 test-acl: explicitly disabled via build config 00:02:41.259 test-bbdev: explicitly disabled via build config 00:02:41.259 test-cmdline: explicitly disabled via build config 00:02:41.259 test-compress-perf: explicitly disabled via build config 00:02:41.259 test-crypto-perf: explicitly disabled via build config 00:02:41.259 test-dma-perf: explicitly disabled via build config 00:02:41.259 test-eventdev: explicitly disabled via build config 00:02:41.259 
test-fib: explicitly disabled via build config 00:02:41.259 test-flow-perf: explicitly disabled via build config 00:02:41.259 test-gpudev: explicitly disabled via build config 00:02:41.259 test-mldev: explicitly disabled via build config 00:02:41.259 test-pipeline: explicitly disabled via build config 00:02:41.259 test-pmd: explicitly disabled via build config 00:02:41.259 test-regex: explicitly disabled via build config 00:02:41.259 test-sad: explicitly disabled via build config 00:02:41.259 test-security-perf: explicitly disabled via build config 00:02:41.259 00:02:41.259 libs: 00:02:41.259 metrics: explicitly disabled via build config 00:02:41.259 acl: explicitly disabled via build config 00:02:41.259 bbdev: explicitly disabled via build config 00:02:41.259 bitratestats: explicitly disabled via build config 00:02:41.259 bpf: explicitly disabled via build config 00:02:41.259 cfgfile: explicitly disabled via build config 00:02:41.259 distributor: explicitly disabled via build config 00:02:41.259 efd: explicitly disabled via build config 00:02:41.259 eventdev: explicitly disabled via build config 00:02:41.259 dispatcher: explicitly disabled via build config 00:02:41.259 gpudev: explicitly disabled via build config 00:02:41.259 gro: explicitly disabled via build config 00:02:41.259 gso: explicitly disabled via build config 00:02:41.259 ip_frag: explicitly disabled via build config 00:02:41.259 jobstats: explicitly disabled via build config 00:02:41.259 latencystats: explicitly disabled via build config 00:02:41.259 lpm: explicitly disabled via build config 00:02:41.259 member: explicitly disabled via build config 00:02:41.259 pcapng: explicitly disabled via build config 00:02:41.259 rawdev: explicitly disabled via build config 00:02:41.259 regexdev: explicitly disabled via build config 00:02:41.259 mldev: explicitly disabled via build config 00:02:41.259 rib: explicitly disabled via build config 00:02:41.259 sched: explicitly disabled via build config 00:02:41.259 stack: explicitly disabled via build config 00:02:41.259 ipsec: explicitly disabled via build config 00:02:41.259 pdcp: explicitly disabled via build config 00:02:41.259 fib: explicitly disabled via build config 00:02:41.259 port: explicitly disabled via build config 00:02:41.259 pdump: explicitly disabled via build config 00:02:41.259 table: explicitly disabled via build config 00:02:41.259 pipeline: explicitly disabled via build config 00:02:41.259 graph: explicitly disabled via build config 00:02:41.259 node: explicitly disabled via build config 00:02:41.259 00:02:41.259 drivers: 00:02:41.259 common/cpt: not in enabled drivers build config 00:02:41.259 common/dpaax: not in enabled drivers build config 00:02:41.259 common/iavf: not in enabled drivers build config 00:02:41.259 common/idpf: not in enabled drivers build config 00:02:41.259 common/mvep: not in enabled drivers build config 00:02:41.259 common/octeontx: not in enabled drivers build config 00:02:41.259 bus/auxiliary: not in enabled drivers build config 00:02:41.259 bus/cdx: not in enabled drivers build config 00:02:41.259 bus/dpaa: not in enabled drivers build config 00:02:41.259 bus/fslmc: not in enabled drivers build config 00:02:41.259 bus/ifpga: not in enabled drivers build config 00:02:41.259 bus/platform: not in enabled drivers build config 00:02:41.259 bus/vmbus: not in enabled drivers build config 00:02:41.259 common/cnxk: not in enabled drivers build config 00:02:41.259 common/mlx5: not in enabled drivers build config 00:02:41.259 common/nfp: not in enabled 
drivers build config 00:02:41.259 common/qat: not in enabled drivers build config 00:02:41.259 common/sfc_efx: not in enabled drivers build config 00:02:41.259 mempool/bucket: not in enabled drivers build config 00:02:41.259 mempool/cnxk: not in enabled drivers build config 00:02:41.259 mempool/dpaa: not in enabled drivers build config 00:02:41.259 mempool/dpaa2: not in enabled drivers build config 00:02:41.259 mempool/octeontx: not in enabled drivers build config 00:02:41.259 mempool/stack: not in enabled drivers build config 00:02:41.259 dma/cnxk: not in enabled drivers build config 00:02:41.259 dma/dpaa: not in enabled drivers build config 00:02:41.259 dma/dpaa2: not in enabled drivers build config 00:02:41.259 dma/hisilicon: not in enabled drivers build config 00:02:41.259 dma/idxd: not in enabled drivers build config 00:02:41.259 dma/ioat: not in enabled drivers build config 00:02:41.259 dma/skeleton: not in enabled drivers build config 00:02:41.259 net/af_packet: not in enabled drivers build config 00:02:41.259 net/af_xdp: not in enabled drivers build config 00:02:41.259 net/ark: not in enabled drivers build config 00:02:41.259 net/atlantic: not in enabled drivers build config 00:02:41.259 net/avp: not in enabled drivers build config 00:02:41.259 net/axgbe: not in enabled drivers build config 00:02:41.259 net/bnx2x: not in enabled drivers build config 00:02:41.259 net/bnxt: not in enabled drivers build config 00:02:41.259 net/bonding: not in enabled drivers build config 00:02:41.259 net/cnxk: not in enabled drivers build config 00:02:41.259 net/cpfl: not in enabled drivers build config 00:02:41.259 net/cxgbe: not in enabled drivers build config 00:02:41.259 net/dpaa: not in enabled drivers build config 00:02:41.259 net/dpaa2: not in enabled drivers build config 00:02:41.259 net/e1000: not in enabled drivers build config 00:02:41.259 net/ena: not in enabled drivers build config 00:02:41.259 net/enetc: not in enabled drivers build config 00:02:41.259 net/enetfec: not in enabled drivers build config 00:02:41.259 net/enic: not in enabled drivers build config 00:02:41.259 net/failsafe: not in enabled drivers build config 00:02:41.259 net/fm10k: not in enabled drivers build config 00:02:41.259 net/gve: not in enabled drivers build config 00:02:41.259 net/hinic: not in enabled drivers build config 00:02:41.259 net/hns3: not in enabled drivers build config 00:02:41.259 net/i40e: not in enabled drivers build config 00:02:41.259 net/iavf: not in enabled drivers build config 00:02:41.259 net/ice: not in enabled drivers build config 00:02:41.259 net/idpf: not in enabled drivers build config 00:02:41.259 net/igc: not in enabled drivers build config 00:02:41.259 net/ionic: not in enabled drivers build config 00:02:41.259 net/ipn3ke: not in enabled drivers build config 00:02:41.259 net/ixgbe: not in enabled drivers build config 00:02:41.259 net/mana: not in enabled drivers build config 00:02:41.259 net/memif: not in enabled drivers build config 00:02:41.259 net/mlx4: not in enabled drivers build config 00:02:41.259 net/mlx5: not in enabled drivers build config 00:02:41.259 net/mvneta: not in enabled drivers build config 00:02:41.259 net/mvpp2: not in enabled drivers build config 00:02:41.259 net/netvsc: not in enabled drivers build config 00:02:41.259 net/nfb: not in enabled drivers build config 00:02:41.259 net/nfp: not in enabled drivers build config 00:02:41.259 net/ngbe: not in enabled drivers build config 00:02:41.259 net/null: not in enabled drivers build config 00:02:41.259 net/octeontx: not 
in enabled drivers build config 00:02:41.259 net/octeon_ep: not in enabled drivers build config 00:02:41.259 net/pcap: not in enabled drivers build config 00:02:41.259 net/pfe: not in enabled drivers build config 00:02:41.259 net/qede: not in enabled drivers build config 00:02:41.259 net/ring: not in enabled drivers build config 00:02:41.259 net/sfc: not in enabled drivers build config 00:02:41.259 net/softnic: not in enabled drivers build config 00:02:41.259 net/tap: not in enabled drivers build config 00:02:41.259 net/thunderx: not in enabled drivers build config 00:02:41.259 net/txgbe: not in enabled drivers build config 00:02:41.259 net/vdev_netvsc: not in enabled drivers build config 00:02:41.259 net/vhost: not in enabled drivers build config 00:02:41.259 net/virtio: not in enabled drivers build config 00:02:41.259 net/vmxnet3: not in enabled drivers build config 00:02:41.259 raw/*: missing internal dependency, "rawdev" 00:02:41.259 crypto/armv8: not in enabled drivers build config 00:02:41.259 crypto/bcmfs: not in enabled drivers build config 00:02:41.259 crypto/caam_jr: not in enabled drivers build config 00:02:41.259 crypto/ccp: not in enabled drivers build config 00:02:41.259 crypto/cnxk: not in enabled drivers build config 00:02:41.259 crypto/dpaa_sec: not in enabled drivers build config 00:02:41.259 crypto/dpaa2_sec: not in enabled drivers build config 00:02:41.259 crypto/ipsec_mb: not in enabled drivers build config 00:02:41.259 crypto/mlx5: not in enabled drivers build config 00:02:41.259 crypto/mvsam: not in enabled drivers build config 00:02:41.259 crypto/nitrox: not in enabled drivers build config 00:02:41.259 crypto/null: not in enabled drivers build config 00:02:41.259 crypto/octeontx: not in enabled drivers build config 00:02:41.259 crypto/openssl: not in enabled drivers build config 00:02:41.259 crypto/scheduler: not in enabled drivers build config 00:02:41.259 crypto/uadk: not in enabled drivers build config 00:02:41.259 crypto/virtio: not in enabled drivers build config 00:02:41.259 compress/isal: not in enabled drivers build config 00:02:41.259 compress/mlx5: not in enabled drivers build config 00:02:41.259 compress/octeontx: not in enabled drivers build config 00:02:41.259 compress/zlib: not in enabled drivers build config 00:02:41.259 regex/*: missing internal dependency, "regexdev" 00:02:41.259 ml/*: missing internal dependency, "mldev" 00:02:41.259 vdpa/ifc: not in enabled drivers build config 00:02:41.259 vdpa/mlx5: not in enabled drivers build config 00:02:41.259 vdpa/nfp: not in enabled drivers build config 00:02:41.259 vdpa/sfc: not in enabled drivers build config 00:02:41.259 event/*: missing internal dependency, "eventdev" 00:02:41.259 baseband/*: missing internal dependency, "bbdev" 00:02:41.259 gpu/*: missing internal dependency, "gpudev" 00:02:41.260 00:02:41.260 00:02:41.260 Build targets in project: 85 00:02:41.260 00:02:41.260 DPDK 23.11.0 00:02:41.260 00:02:41.260 User defined options 00:02:41.260 buildtype : debug 00:02:41.260 default_library : static 00:02:41.260 libdir : lib 00:02:41.260 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:41.260 b_sanitize : address 00:02:41.260 c_args : -Wno-stringop-overflow -fcommon -fPIC -Werror 00:02:41.260 c_link_args : 00:02:41.260 cpu_instruction_set: native 00:02:41.260 disable_apps : 
test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:02:41.260 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:02:41.260 enable_docs : false 00:02:41.260 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:41.260 enable_kmods : false 00:02:41.260 tests : false 00:02:41.260 00:02:41.260 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:41.260 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:41.260 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:41.260 [2/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:41.260 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:41.260 [4/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:41.260 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:41.260 [6/265] Linking static target lib/librte_log.a 00:02:41.260 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:41.260 [8/265] Linking static target lib/librte_kvargs.a 00:02:41.260 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:41.260 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:41.260 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:41.260 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:41.260 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:41.260 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:41.260 [15/265] Linking static target lib/librte_telemetry.a 00:02:41.260 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:41.260 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:41.260 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:41.260 [19/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.260 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:41.260 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:41.260 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:41.260 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:41.260 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:41.260 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:41.260 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:41.260 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:41.520 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:41.520 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:41.520 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:41.520 [31/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:41.520 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:41.520 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:41.779 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:41.779 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:41.779 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:41.779 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:41.779 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:41.779 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:41.779 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:41.779 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:42.037 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:42.037 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:42.037 [44/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.037 [45/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.037 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:42.037 [47/265] Linking target lib/librte_log.so.24.0 00:02:42.037 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:42.037 [49/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:42.296 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:42.296 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:42.296 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:42.296 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:42.296 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.296 [55/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:42.296 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:42.296 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:42.296 [58/265] Linking target lib/librte_kvargs.so.24.0 00:02:42.616 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:42.616 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:42.616 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:42.616 [62/265] Linking target lib/librte_telemetry.so.24.0 00:02:42.616 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:42.616 [64/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:42.616 [65/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:42.616 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.616 [67/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:42.896 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.896 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:42.896 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:42.896 [71/265] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:42.896 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:42.896 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:42.896 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:42.896 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:42.896 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:42.896 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.896 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:42.896 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:43.154 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:43.154 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.154 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:43.154 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.413 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:43.413 [85/265] Linking static target lib/librte_ring.a 00:02:43.413 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:43.413 [87/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:43.413 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:43.671 [89/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:43.671 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:43.671 [91/265] Linking static target lib/librte_eal.a 00:02:43.671 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:43.671 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:43.671 [94/265] Linking static target lib/librte_mempool.a 00:02:43.671 [95/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:43.671 [96/265] Linking static target lib/librte_rcu.a 00:02:43.929 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:43.929 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:43.929 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:43.929 [100/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:43.929 [101/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.929 [102/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:43.929 [103/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.188 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:44.188 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:44.188 [106/265] Linking static target lib/librte_net.a 00:02:44.188 [107/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.188 [108/265] Linking static target lib/librte_mbuf.a 00:02:44.188 [109/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.188 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:44.188 [111/265] Linking static target lib/librte_meter.a 00:02:44.188 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:44.447 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:44.447 
[114/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.447 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.447 [116/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.447 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:44.705 [118/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.705 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:44.705 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.963 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:44.964 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.964 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.964 [124/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.964 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:44.964 [126/265] Linking static target lib/librte_pci.a 00:02:44.964 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:45.221 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:45.221 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:45.221 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:45.221 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:45.221 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:45.221 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:45.221 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:45.221 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:45.221 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:45.221 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:45.221 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:45.221 [139/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.221 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:45.479 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:45.479 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.479 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:45.479 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:45.479 [145/265] Linking static target lib/librte_cmdline.a 00:02:45.737 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:45.737 [147/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:45.737 [148/265] Linking static target lib/librte_timer.a 00:02:45.737 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.737 [150/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.737 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:45.737 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:45.995 [153/265] 
Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:45.995 [154/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.278 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.278 [156/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:46.278 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:46.278 [158/265] Linking static target lib/librte_compressdev.a 00:02:46.278 [159/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:46.278 [160/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:46.278 [161/265] Linking static target lib/librte_dmadev.a 00:02:46.278 [162/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:46.278 [163/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:46.278 [164/265] Linking static target lib/librte_hash.a 00:02:46.278 [165/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:46.278 [166/265] Linking static target lib/librte_ethdev.a 00:02:46.539 [167/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:46.539 [168/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.539 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:46.539 [170/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:46.539 [171/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:46.539 [172/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.798 [173/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.798 [174/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:46.798 [175/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:46.798 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:46.798 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:46.798 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:47.056 [179/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:47.056 [180/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.056 [181/265] Linking static target lib/librte_cryptodev.a 00:02:47.056 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:47.056 [183/265] Linking static target lib/librte_power.a 00:02:47.313 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:47.313 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:47.314 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:47.314 [187/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:47.314 [188/265] Linking static target lib/librte_reorder.a 00:02:47.571 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:47.571 [190/265] Linking static target lib/librte_security.a 00:02:47.829 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:47.829 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.829 [193/265] Generating lib/power.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:48.086 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.086 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:48.086 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:48.086 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:48.086 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:48.344 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:48.344 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:48.344 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:48.344 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:48.344 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:48.603 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:48.603 [205/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:48.603 [206/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.603 [207/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:48.603 [208/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:48.603 [209/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:48.861 [210/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:48.861 [211/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:48.861 [212/265] Linking static target drivers/librte_bus_pci.a 00:02:48.861 [213/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:48.861 [214/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.861 [215/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.861 [216/265] Linking static target drivers/librte_bus_vdev.a 00:02:49.119 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:49.119 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:49.119 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.119 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:49.119 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.119 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.119 [223/265] Linking static target drivers/librte_mempool_ring.a 00:02:49.377 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.749 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:53.275 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.275 [227/265] Linking target lib/librte_eal.so.24.0 00:02:53.275 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:53.275 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.275 [230/265] Linking target 
lib/librte_meter.so.24.0 00:02:53.275 [231/265] Linking target lib/librte_pci.so.24.0 00:02:53.275 [232/265] Linking target lib/librte_dmadev.so.24.0 00:02:53.275 [233/265] Linking target lib/librte_ring.so.24.0 00:02:53.275 [234/265] Linking target lib/librte_timer.so.24.0 00:02:53.275 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:53.275 [236/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:53.275 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:53.275 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:53.275 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:53.275 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:53.275 [241/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:53.275 [242/265] Linking target lib/librte_mempool.so.24.0 00:02:53.275 [243/265] Linking target lib/librte_rcu.so.24.0 00:02:53.533 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:53.533 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:53.533 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:53.533 [247/265] Linking target lib/librte_mbuf.so.24.0 00:02:53.792 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:53.792 [249/265] Linking target lib/librte_reorder.so.24.0 00:02:53.792 [250/265] Linking target lib/librte_compressdev.so.24.0 00:02:53.792 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:02:53.792 [252/265] Linking target lib/librte_net.so.24.0 00:02:53.792 [253/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:53.792 [254/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:54.051 [255/265] Linking target lib/librte_security.so.24.0 00:02:54.051 [256/265] Linking target lib/librte_hash.so.24.0 00:02:54.051 [257/265] Linking target lib/librte_cmdline.so.24.0 00:02:54.051 [258/265] Linking target lib/librte_ethdev.so.24.0 00:02:54.051 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:54.051 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:54.309 [261/265] Linking target lib/librte_power.so.24.0 00:02:54.568 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:54.568 [263/265] Linking static target lib/librte_vhost.a 00:02:57.103 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.103 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:57.103 INFO: autodetecting backend as ninja 00:02:57.103 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:58.036 CC lib/ut/ut.o 00:02:58.036 CC lib/ut_mock/mock.o 00:02:58.036 CC lib/log/log.o 00:02:58.036 CC lib/log/log_deprecated.o 00:02:58.036 CC lib/log/log_flags.o 00:02:58.036 LIB libspdk_ut_mock.a 00:02:58.294 LIB libspdk_ut.a 00:02:58.294 LIB libspdk_log.a 00:02:58.553 CXX lib/trace_parser/trace.o 00:02:58.553 CC lib/ioat/ioat.o 00:02:58.553 CC lib/dma/dma.o 00:02:58.553 CC lib/util/base64.o 00:02:58.553 CC lib/util/cpuset.o 00:02:58.553 CC lib/util/bit_array.o 00:02:58.553 CC lib/util/crc32.o 00:02:58.553 CC lib/util/crc16.o 00:02:58.553 CC 
lib/util/crc32c.o 00:02:58.553 CC lib/vfio_user/host/vfio_user_pci.o 00:02:58.553 CC lib/util/crc32_ieee.o 00:02:58.553 CC lib/util/crc64.o 00:02:58.811 CC lib/util/dif.o 00:02:58.811 CC lib/util/fd.o 00:02:58.811 LIB libspdk_dma.a 00:02:58.811 CC lib/util/file.o 00:02:58.811 CC lib/util/hexlify.o 00:02:58.811 CC lib/util/iov.o 00:02:58.811 CC lib/util/math.o 00:02:58.811 CC lib/util/pipe.o 00:02:58.811 CC lib/util/strerror_tls.o 00:02:58.811 CC lib/util/string.o 00:02:58.811 CC lib/vfio_user/host/vfio_user.o 00:02:59.068 CC lib/util/uuid.o 00:02:59.069 CC lib/util/fd_group.o 00:02:59.069 CC lib/util/xor.o 00:02:59.069 LIB libspdk_ioat.a 00:02:59.069 CC lib/util/zipf.o 00:02:59.069 LIB libspdk_vfio_user.a 00:02:59.326 LIB libspdk_util.a 00:02:59.584 CC lib/vmd/vmd.o 00:02:59.584 CC lib/vmd/led.o 00:02:59.584 CC lib/env_dpdk/env.o 00:02:59.584 CC lib/json/json_parse.o 00:02:59.584 CC lib/env_dpdk/memory.o 00:02:59.584 CC lib/json/json_util.o 00:02:59.584 CC lib/conf/conf.o 00:02:59.584 CC lib/idxd/idxd.o 00:02:59.584 CC lib/rdma/common.o 00:02:59.584 LIB libspdk_trace_parser.a 00:02:59.842 CC lib/idxd/idxd_user.o 00:02:59.842 CC lib/rdma/rdma_verbs.o 00:02:59.842 LIB libspdk_conf.a 00:02:59.842 CC lib/json/json_write.o 00:02:59.842 CC lib/env_dpdk/pci.o 00:02:59.842 CC lib/env_dpdk/init.o 00:03:00.100 CC lib/env_dpdk/threads.o 00:03:00.100 CC lib/env_dpdk/pci_ioat.o 00:03:00.100 LIB libspdk_rdma.a 00:03:00.100 CC lib/env_dpdk/pci_virtio.o 00:03:00.100 CC lib/env_dpdk/pci_vmd.o 00:03:00.100 CC lib/env_dpdk/pci_idxd.o 00:03:00.100 LIB libspdk_json.a 00:03:00.360 CC lib/env_dpdk/pci_event.o 00:03:00.360 CC lib/env_dpdk/sigbus_handler.o 00:03:00.360 LIB libspdk_idxd.a 00:03:00.360 CC lib/env_dpdk/pci_dpdk.o 00:03:00.360 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:00.360 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:00.360 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:00.360 CC lib/jsonrpc/jsonrpc_server.o 00:03:00.360 CC lib/jsonrpc/jsonrpc_client.o 00:03:00.360 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:00.360 LIB libspdk_vmd.a 00:03:00.618 LIB libspdk_jsonrpc.a 00:03:00.877 CC lib/rpc/rpc.o 00:03:01.136 LIB libspdk_env_dpdk.a 00:03:01.136 LIB libspdk_rpc.a 00:03:01.395 CC lib/trace/trace.o 00:03:01.395 CC lib/trace/trace_flags.o 00:03:01.395 CC lib/trace/trace_rpc.o 00:03:01.395 CC lib/notify/notify.o 00:03:01.395 CC lib/notify/notify_rpc.o 00:03:01.395 CC lib/keyring/keyring.o 00:03:01.395 CC lib/keyring/keyring_rpc.o 00:03:01.654 LIB libspdk_notify.a 00:03:01.654 LIB libspdk_keyring.a 00:03:01.654 LIB libspdk_trace.a 00:03:02.254 CC lib/thread/thread.o 00:03:02.254 CC lib/thread/iobuf.o 00:03:02.254 CC lib/sock/sock_rpc.o 00:03:02.254 CC lib/sock/sock.o 00:03:02.822 LIB libspdk_sock.a 00:03:03.080 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:03.080 CC lib/nvme/nvme_ctrlr.o 00:03:03.080 CC lib/nvme/nvme_ns.o 00:03:03.080 CC lib/nvme/nvme_pcie_common.o 00:03:03.080 CC lib/nvme/nvme_ns_cmd.o 00:03:03.080 CC lib/nvme/nvme_fabric.o 00:03:03.080 CC lib/nvme/nvme_pcie.o 00:03:03.080 CC lib/nvme/nvme.o 00:03:03.080 CC lib/nvme/nvme_qpair.o 00:03:03.647 CC lib/nvme/nvme_quirks.o 00:03:03.647 LIB libspdk_thread.a 00:03:03.647 CC lib/nvme/nvme_transport.o 00:03:03.647 CC lib/nvme/nvme_discovery.o 00:03:03.647 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:03.647 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:03.904 CC lib/nvme/nvme_tcp.o 00:03:03.904 CC lib/nvme/nvme_opal.o 00:03:03.904 CC lib/nvme/nvme_io_msg.o 00:03:03.904 CC lib/nvme/nvme_poll_group.o 00:03:04.162 CC lib/nvme/nvme_zns.o 00:03:04.162 CC lib/nvme/nvme_stubs.o 00:03:04.420 CC 
lib/accel/accel.o 00:03:04.420 CC lib/blob/blobstore.o 00:03:04.420 CC lib/blob/request.o 00:03:04.420 CC lib/blob/zeroes.o 00:03:04.420 CC lib/blob/blob_bs_dev.o 00:03:04.678 CC lib/nvme/nvme_auth.o 00:03:04.678 CC lib/nvme/nvme_cuse.o 00:03:04.678 CC lib/nvme/nvme_rdma.o 00:03:04.678 CC lib/accel/accel_rpc.o 00:03:04.678 CC lib/init/json_config.o 00:03:04.936 CC lib/virtio/virtio.o 00:03:04.936 CC lib/accel/accel_sw.o 00:03:04.936 CC lib/init/subsystem.o 00:03:05.194 CC lib/virtio/virtio_vhost_user.o 00:03:05.194 CC lib/init/subsystem_rpc.o 00:03:05.194 CC lib/virtio/virtio_vfio_user.o 00:03:05.452 CC lib/init/rpc.o 00:03:05.452 CC lib/virtio/virtio_pci.o 00:03:05.452 LIB libspdk_accel.a 00:03:05.452 LIB libspdk_init.a 00:03:05.710 LIB libspdk_virtio.a 00:03:05.710 CC lib/bdev/bdev.o 00:03:05.710 CC lib/bdev/bdev_rpc.o 00:03:05.710 CC lib/bdev/scsi_nvme.o 00:03:05.710 CC lib/bdev/part.o 00:03:05.710 CC lib/bdev/bdev_zone.o 00:03:05.710 CC lib/event/app.o 00:03:05.710 CC lib/event/log_rpc.o 00:03:05.710 CC lib/event/reactor.o 00:03:05.967 CC lib/event/app_rpc.o 00:03:05.967 CC lib/event/scheduler_static.o 00:03:05.967 LIB libspdk_nvme.a 00:03:06.226 LIB libspdk_event.a 00:03:08.130 LIB libspdk_blob.a 00:03:08.130 CC lib/lvol/lvol.o 00:03:08.130 CC lib/blobfs/blobfs.o 00:03:08.130 CC lib/blobfs/tree.o 00:03:09.064 LIB libspdk_bdev.a 00:03:09.064 CC lib/scsi/dev.o 00:03:09.064 CC lib/scsi/lun.o 00:03:09.064 CC lib/scsi/port.o 00:03:09.064 CC lib/scsi/scsi_bdev.o 00:03:09.064 CC lib/scsi/scsi.o 00:03:09.064 CC lib/nbd/nbd.o 00:03:09.064 CC lib/ftl/ftl_core.o 00:03:09.064 CC lib/nvmf/ctrlr.o 00:03:09.064 LIB libspdk_blobfs.a 00:03:09.064 CC lib/nvmf/ctrlr_discovery.o 00:03:09.322 LIB libspdk_lvol.a 00:03:09.322 CC lib/nvmf/ctrlr_bdev.o 00:03:09.322 CC lib/scsi/scsi_pr.o 00:03:09.322 CC lib/nbd/nbd_rpc.o 00:03:09.322 CC lib/scsi/scsi_rpc.o 00:03:09.582 CC lib/ftl/ftl_init.o 00:03:09.582 CC lib/ftl/ftl_layout.o 00:03:09.582 CC lib/ftl/ftl_debug.o 00:03:09.582 CC lib/scsi/task.o 00:03:09.582 LIB libspdk_nbd.a 00:03:09.582 CC lib/ftl/ftl_io.o 00:03:09.840 CC lib/nvmf/subsystem.o 00:03:09.840 CC lib/nvmf/nvmf.o 00:03:09.840 CC lib/nvmf/nvmf_rpc.o 00:03:09.840 CC lib/nvmf/transport.o 00:03:09.840 CC lib/ftl/ftl_sb.o 00:03:09.840 LIB libspdk_scsi.a 00:03:09.840 CC lib/ftl/ftl_l2p.o 00:03:09.840 CC lib/ftl/ftl_l2p_flat.o 00:03:10.099 CC lib/ftl/ftl_nv_cache.o 00:03:10.099 CC lib/nvmf/tcp.o 00:03:10.099 CC lib/ftl/ftl_band.o 00:03:10.099 CC lib/ftl/ftl_band_ops.o 00:03:10.099 CC lib/nvmf/rdma.o 00:03:10.666 CC lib/ftl/ftl_writer.o 00:03:10.666 CC lib/ftl/ftl_rq.o 00:03:10.666 CC lib/iscsi/conn.o 00:03:10.666 CC lib/iscsi/init_grp.o 00:03:10.924 CC lib/ftl/ftl_reloc.o 00:03:10.924 CC lib/iscsi/iscsi.o 00:03:10.924 CC lib/vhost/vhost.o 00:03:10.924 CC lib/ftl/ftl_l2p_cache.o 00:03:11.182 CC lib/ftl/ftl_p2l.o 00:03:11.182 CC lib/ftl/mngt/ftl_mngt.o 00:03:11.182 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:11.182 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:11.440 CC lib/iscsi/md5.o 00:03:11.440 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:11.440 CC lib/iscsi/param.o 00:03:11.440 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:11.440 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:11.440 CC lib/vhost/vhost_rpc.o 00:03:11.699 CC lib/vhost/vhost_scsi.o 00:03:11.699 CC lib/iscsi/portal_grp.o 00:03:11.699 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:11.699 CC lib/iscsi/tgt_node.o 00:03:11.699 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:11.699 CC lib/iscsi/iscsi_subsystem.o 00:03:11.958 CC lib/iscsi/iscsi_rpc.o 00:03:11.958 CC lib/ftl/mngt/ftl_mngt_band.o 
00:03:11.958 CC lib/iscsi/task.o 00:03:11.958 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:11.958 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:12.216 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:12.216 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:12.216 CC lib/vhost/vhost_blk.o 00:03:12.216 CC lib/ftl/utils/ftl_conf.o 00:03:12.216 CC lib/vhost/rte_vhost_user.o 00:03:12.216 CC lib/ftl/utils/ftl_md.o 00:03:12.216 CC lib/ftl/utils/ftl_mempool.o 00:03:12.475 CC lib/ftl/utils/ftl_bitmap.o 00:03:12.475 CC lib/ftl/utils/ftl_property.o 00:03:12.475 LIB libspdk_iscsi.a 00:03:12.475 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:12.475 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:12.475 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:12.733 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:12.733 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:12.733 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:12.733 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:12.733 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:12.733 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:12.733 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:12.733 CC lib/ftl/base/ftl_base_dev.o 00:03:12.733 CC lib/ftl/base/ftl_base_bdev.o 00:03:12.992 CC lib/ftl/ftl_trace.o 00:03:12.992 LIB libspdk_nvmf.a 00:03:13.250 LIB libspdk_ftl.a 00:03:13.250 LIB libspdk_vhost.a 00:03:13.816 CC module/env_dpdk/env_dpdk_rpc.o 00:03:13.816 CC module/scheduler/gscheduler/gscheduler.o 00:03:13.816 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:13.816 CC module/keyring/file/keyring.o 00:03:13.816 CC module/blob/bdev/blob_bdev.o 00:03:13.816 CC module/keyring/linux/keyring.o 00:03:13.816 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:13.816 CC module/accel/ioat/accel_ioat.o 00:03:13.816 CC module/accel/error/accel_error.o 00:03:13.816 CC module/sock/posix/posix.o 00:03:13.816 LIB libspdk_env_dpdk_rpc.a 00:03:13.816 CC module/accel/ioat/accel_ioat_rpc.o 00:03:14.074 CC module/keyring/file/keyring_rpc.o 00:03:14.074 LIB libspdk_scheduler_gscheduler.a 00:03:14.074 LIB libspdk_scheduler_dpdk_governor.a 00:03:14.074 CC module/keyring/linux/keyring_rpc.o 00:03:14.074 CC module/accel/error/accel_error_rpc.o 00:03:14.074 LIB libspdk_scheduler_dynamic.a 00:03:14.074 LIB libspdk_accel_ioat.a 00:03:14.074 LIB libspdk_blob_bdev.a 00:03:14.074 LIB libspdk_keyring_file.a 00:03:14.074 LIB libspdk_keyring_linux.a 00:03:14.074 LIB libspdk_accel_error.a 00:03:14.074 CC module/accel/dsa/accel_dsa.o 00:03:14.074 CC module/accel/dsa/accel_dsa_rpc.o 00:03:14.332 CC module/accel/iaa/accel_iaa.o 00:03:14.332 CC module/blobfs/bdev/blobfs_bdev.o 00:03:14.332 CC module/bdev/lvol/vbdev_lvol.o 00:03:14.332 CC module/bdev/malloc/bdev_malloc.o 00:03:14.332 CC module/bdev/delay/vbdev_delay.o 00:03:14.332 CC module/bdev/gpt/gpt.o 00:03:14.332 CC module/bdev/error/vbdev_error.o 00:03:14.332 LIB libspdk_accel_dsa.a 00:03:14.591 CC module/accel/iaa/accel_iaa_rpc.o 00:03:14.591 CC module/bdev/gpt/vbdev_gpt.o 00:03:14.591 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:14.591 CC module/bdev/null/bdev_null.o 00:03:14.591 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:14.591 LIB libspdk_accel_iaa.a 00:03:14.591 CC module/bdev/null/bdev_null_rpc.o 00:03:14.591 CC module/bdev/error/vbdev_error_rpc.o 00:03:14.591 LIB libspdk_sock_posix.a 00:03:14.591 LIB libspdk_blobfs_bdev.a 00:03:14.849 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:14.849 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:14.849 LIB libspdk_bdev_error.a 00:03:14.849 LIB libspdk_bdev_null.a 00:03:14.849 LIB libspdk_bdev_gpt.a 00:03:14.849 CC module/bdev/nvme/bdev_nvme.o 00:03:14.849 CC 
module/bdev/nvme/bdev_nvme_rpc.o 00:03:14.849 CC module/bdev/nvme/nvme_rpc.o 00:03:14.849 LIB libspdk_bdev_lvol.a 00:03:14.849 LIB libspdk_bdev_delay.a 00:03:14.849 LIB libspdk_bdev_malloc.a 00:03:15.107 CC module/bdev/passthru/vbdev_passthru.o 00:03:15.107 CC module/bdev/raid/bdev_raid.o 00:03:15.107 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:15.107 CC module/bdev/split/vbdev_split.o 00:03:15.107 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:15.107 CC module/bdev/aio/bdev_aio.o 00:03:15.107 CC module/bdev/ftl/bdev_ftl.o 00:03:15.107 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:15.107 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.364 LIB libspdk_bdev_passthru.a 00:03:15.364 CC module/bdev/split/vbdev_split_rpc.o 00:03:15.364 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:15.364 LIB libspdk_bdev_ftl.a 00:03:15.364 LIB libspdk_bdev_aio.a 00:03:15.622 CC module/bdev/raid/bdev_raid_rpc.o 00:03:15.622 CC module/bdev/raid/bdev_raid_sb.o 00:03:15.622 CC module/bdev/raid/raid0.o 00:03:15.622 CC module/bdev/iscsi/bdev_iscsi.o 00:03:15.622 LIB libspdk_bdev_zone_block.a 00:03:15.622 LIB libspdk_bdev_split.a 00:03:15.622 CC module/bdev/nvme/bdev_mdns_client.o 00:03:15.622 CC module/bdev/nvme/vbdev_opal.o 00:03:15.622 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:15.622 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:15.622 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:15.879 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:15.879 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:15.879 CC module/bdev/raid/raid1.o 00:03:15.879 CC module/bdev/raid/concat.o 00:03:15.879 CC module/bdev/raid/raid5f.o 00:03:15.879 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:16.138 LIB libspdk_bdev_iscsi.a 00:03:16.138 LIB libspdk_bdev_virtio.a 00:03:16.396 LIB libspdk_bdev_raid.a 00:03:17.769 LIB libspdk_bdev_nvme.a 00:03:18.335 CC module/event/subsystems/keyring/keyring.o 00:03:18.335 CC module/event/subsystems/sock/sock.o 00:03:18.335 CC module/event/subsystems/vmd/vmd.o 00:03:18.335 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:18.335 CC module/event/subsystems/scheduler/scheduler.o 00:03:18.335 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:18.335 CC module/event/subsystems/iobuf/iobuf.o 00:03:18.335 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:18.335 LIB libspdk_event_sock.a 00:03:18.335 LIB libspdk_event_keyring.a 00:03:18.335 LIB libspdk_event_vhost_blk.a 00:03:18.335 LIB libspdk_event_scheduler.a 00:03:18.335 LIB libspdk_event_vmd.a 00:03:18.335 LIB libspdk_event_iobuf.a 00:03:18.903 CC module/event/subsystems/accel/accel.o 00:03:18.903 LIB libspdk_event_accel.a 00:03:19.160 CC module/event/subsystems/bdev/bdev.o 00:03:19.418 LIB libspdk_event_bdev.a 00:03:19.675 CC module/event/subsystems/scsi/scsi.o 00:03:19.675 CC module/event/subsystems/nbd/nbd.o 00:03:19.675 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:19.675 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:19.933 LIB libspdk_event_nbd.a 00:03:19.933 LIB libspdk_event_scsi.a 00:03:19.933 LIB libspdk_event_nvmf.a 00:03:20.191 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:20.191 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.448 LIB libspdk_event_vhost_scsi.a 00:03:20.448 LIB libspdk_event_iscsi.a 00:03:20.705 CC app/trace_record/trace_record.o 00:03:20.705 CXX app/trace/trace.o 00:03:20.705 CC app/spdk_nvme_perf/perf.o 00:03:20.705 CC app/spdk_lspci/spdk_lspci.o 00:03:20.705 CC app/spdk_nvme_identify/identify.o 00:03:20.705 CC app/nvmf_tgt/nvmf_main.o 00:03:20.705 CC app/spdk_tgt/spdk_tgt.o 00:03:20.705 CC 
examples/accel/perf/accel_perf.o 00:03:20.705 CC app/iscsi_tgt/iscsi_tgt.o 00:03:20.965 CC test/accel/dif/dif.o 00:03:20.965 LINK spdk_lspci 00:03:20.965 LINK nvmf_tgt 00:03:20.965 LINK spdk_tgt 00:03:20.965 LINK spdk_trace_record 00:03:21.232 LINK iscsi_tgt 00:03:21.232 LINK spdk_trace 00:03:21.232 LINK accel_perf 00:03:21.490 LINK dif 00:03:21.748 LINK spdk_nvme_perf 00:03:21.748 LINK spdk_nvme_identify 00:03:21.748 CC test/app/bdev_svc/bdev_svc.o 00:03:21.748 CC test/bdev/bdevio/bdevio.o 00:03:22.006 LINK bdev_svc 00:03:22.264 LINK bdevio 00:03:22.522 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:22.522 CC examples/bdev/hello_world/hello_bdev.o 00:03:22.779 LINK hello_bdev 00:03:22.779 LINK nvme_fuzz 00:03:23.039 CC test/app/histogram_perf/histogram_perf.o 00:03:23.039 CC examples/blob/hello_world/hello_blob.o 00:03:23.039 LINK histogram_perf 00:03:23.297 LINK hello_blob 00:03:23.863 CC examples/ioat/perf/perf.o 00:03:24.153 LINK ioat_perf 00:03:24.430 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:24.687 CC examples/bdev/bdevperf/bdevperf.o 00:03:24.687 CC examples/sock/hello_world/hello_sock.o 00:03:24.687 CC examples/nvme/hello_world/hello_world.o 00:03:24.943 CC examples/ioat/verify/verify.o 00:03:24.943 LINK hello_world 00:03:25.202 LINK hello_sock 00:03:25.202 LINK verify 00:03:25.202 CC app/spdk_nvme_discover/discovery_aer.o 00:03:25.202 CC app/spdk_top/spdk_top.o 00:03:25.459 LINK spdk_nvme_discover 00:03:25.459 LINK bdevperf 00:03:25.719 CC examples/vmd/lsvmd/lsvmd.o 00:03:25.719 LINK lsvmd 00:03:25.719 CC examples/vmd/led/led.o 00:03:25.978 LINK led 00:03:25.978 CC examples/nvme/reconnect/reconnect.o 00:03:25.978 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:26.236 LINK spdk_top 00:03:26.494 LINK reconnect 00:03:26.494 CC examples/blob/cli/blobcli.o 00:03:26.494 LINK iscsi_fuzz 00:03:26.752 LINK nvme_manage 00:03:26.752 CC examples/nvmf/nvmf/nvmf.o 00:03:26.752 CC test/app/jsoncat/jsoncat.o 00:03:27.009 LINK jsoncat 00:03:27.010 LINK nvmf 00:03:27.010 CC app/vhost/vhost.o 00:03:27.269 LINK blobcli 00:03:27.269 CC examples/nvme/arbitration/arbitration.o 00:03:27.269 LINK vhost 00:03:27.526 CC test/app/stub/stub.o 00:03:27.526 LINK arbitration 00:03:27.782 LINK stub 00:03:27.782 CC app/spdk_dd/spdk_dd.o 00:03:28.039 CC app/fio/nvme/fio_plugin.o 00:03:28.039 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:28.039 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:28.297 LINK spdk_dd 00:03:28.297 CC test/blobfs/mkfs/mkfs.o 00:03:28.555 LINK vhost_fuzz 00:03:28.813 LINK mkfs 00:03:28.813 LINK spdk_nvme 00:03:29.070 CC examples/nvme/hotplug/hotplug.o 00:03:29.328 TEST_HEADER include/spdk/accel.h 00:03:29.328 TEST_HEADER include/spdk/accel_module.h 00:03:29.328 TEST_HEADER include/spdk/assert.h 00:03:29.328 TEST_HEADER include/spdk/barrier.h 00:03:29.328 TEST_HEADER include/spdk/base64.h 00:03:29.328 TEST_HEADER include/spdk/bdev.h 00:03:29.328 TEST_HEADER include/spdk/bdev_module.h 00:03:29.328 TEST_HEADER include/spdk/bdev_zone.h 00:03:29.328 TEST_HEADER include/spdk/bit_array.h 00:03:29.328 TEST_HEADER include/spdk/bit_pool.h 00:03:29.328 TEST_HEADER include/spdk/blob.h 00:03:29.328 TEST_HEADER include/spdk/blob_bdev.h 00:03:29.328 TEST_HEADER include/spdk/blobfs.h 00:03:29.328 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:29.328 TEST_HEADER include/spdk/conf.h 00:03:29.328 CC examples/util/zipf/zipf.o 00:03:29.328 TEST_HEADER include/spdk/config.h 00:03:29.328 TEST_HEADER include/spdk/cpuset.h 00:03:29.328 TEST_HEADER include/spdk/crc16.h 00:03:29.328 TEST_HEADER 
include/spdk/crc32.h 00:03:29.328 TEST_HEADER include/spdk/crc64.h 00:03:29.328 TEST_HEADER include/spdk/dif.h 00:03:29.328 TEST_HEADER include/spdk/dma.h 00:03:29.328 TEST_HEADER include/spdk/endian.h 00:03:29.328 TEST_HEADER include/spdk/env.h 00:03:29.328 TEST_HEADER include/spdk/env_dpdk.h 00:03:29.328 TEST_HEADER include/spdk/event.h 00:03:29.328 TEST_HEADER include/spdk/fd.h 00:03:29.328 TEST_HEADER include/spdk/fd_group.h 00:03:29.328 TEST_HEADER include/spdk/file.h 00:03:29.328 TEST_HEADER include/spdk/ftl.h 00:03:29.328 TEST_HEADER include/spdk/gpt_spec.h 00:03:29.328 TEST_HEADER include/spdk/hexlify.h 00:03:29.328 TEST_HEADER include/spdk/histogram_data.h 00:03:29.328 TEST_HEADER include/spdk/idxd.h 00:03:29.328 TEST_HEADER include/spdk/idxd_spec.h 00:03:29.328 TEST_HEADER include/spdk/init.h 00:03:29.328 TEST_HEADER include/spdk/ioat.h 00:03:29.328 TEST_HEADER include/spdk/ioat_spec.h 00:03:29.328 TEST_HEADER include/spdk/iscsi_spec.h 00:03:29.328 TEST_HEADER include/spdk/json.h 00:03:29.328 TEST_HEADER include/spdk/jsonrpc.h 00:03:29.328 TEST_HEADER include/spdk/keyring.h 00:03:29.328 TEST_HEADER include/spdk/keyring_module.h 00:03:29.328 TEST_HEADER include/spdk/likely.h 00:03:29.328 TEST_HEADER include/spdk/log.h 00:03:29.328 TEST_HEADER include/spdk/lvol.h 00:03:29.328 TEST_HEADER include/spdk/memory.h 00:03:29.328 TEST_HEADER include/spdk/mmio.h 00:03:29.328 TEST_HEADER include/spdk/nbd.h 00:03:29.328 LINK hotplug 00:03:29.328 TEST_HEADER include/spdk/notify.h 00:03:29.328 TEST_HEADER include/spdk/nvme.h 00:03:29.328 TEST_HEADER include/spdk/nvme_intel.h 00:03:29.328 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:29.328 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:29.328 TEST_HEADER include/spdk/nvme_spec.h 00:03:29.328 TEST_HEADER include/spdk/nvme_zns.h 00:03:29.328 TEST_HEADER include/spdk/nvmf.h 00:03:29.328 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:29.328 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:29.328 TEST_HEADER include/spdk/nvmf_spec.h 00:03:29.328 TEST_HEADER include/spdk/nvmf_transport.h 00:03:29.586 TEST_HEADER include/spdk/opal.h 00:03:29.586 TEST_HEADER include/spdk/opal_spec.h 00:03:29.586 TEST_HEADER include/spdk/pci_ids.h 00:03:29.586 TEST_HEADER include/spdk/pipe.h 00:03:29.586 TEST_HEADER include/spdk/queue.h 00:03:29.586 TEST_HEADER include/spdk/reduce.h 00:03:29.586 TEST_HEADER include/spdk/rpc.h 00:03:29.586 TEST_HEADER include/spdk/scheduler.h 00:03:29.586 TEST_HEADER include/spdk/scsi.h 00:03:29.586 TEST_HEADER include/spdk/scsi_spec.h 00:03:29.586 TEST_HEADER include/spdk/sock.h 00:03:29.586 TEST_HEADER include/spdk/stdinc.h 00:03:29.586 TEST_HEADER include/spdk/string.h 00:03:29.586 TEST_HEADER include/spdk/thread.h 00:03:29.586 TEST_HEADER include/spdk/trace.h 00:03:29.586 TEST_HEADER include/spdk/trace_parser.h 00:03:29.586 TEST_HEADER include/spdk/tree.h 00:03:29.586 TEST_HEADER include/spdk/ublk.h 00:03:29.586 TEST_HEADER include/spdk/util.h 00:03:29.586 TEST_HEADER include/spdk/uuid.h 00:03:29.586 TEST_HEADER include/spdk/version.h 00:03:29.586 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:29.586 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:29.586 TEST_HEADER include/spdk/vhost.h 00:03:29.586 TEST_HEADER include/spdk/vmd.h 00:03:29.586 TEST_HEADER include/spdk/xor.h 00:03:29.586 TEST_HEADER include/spdk/zipf.h 00:03:29.586 CXX test/cpp_headers/accel.o 00:03:29.586 CXX test/cpp_headers/accel_module.o 00:03:29.843 LINK zipf 00:03:30.101 CXX test/cpp_headers/assert.o 00:03:30.101 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:30.101 CXX 
test/cpp_headers/barrier.o 00:03:30.361 LINK cmb_copy 00:03:30.361 CXX test/cpp_headers/base64.o 00:03:30.361 CC app/fio/bdev/fio_plugin.o 00:03:30.618 CXX test/cpp_headers/bdev.o 00:03:30.618 CC examples/nvme/abort/abort.o 00:03:30.618 CXX test/cpp_headers/bdev_module.o 00:03:30.618 CXX test/cpp_headers/bdev_zone.o 00:03:30.876 CC examples/thread/thread/thread_ex.o 00:03:30.876 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:30.876 CXX test/cpp_headers/bit_array.o 00:03:30.876 LINK spdk_bdev 00:03:31.134 CXX test/cpp_headers/bit_pool.o 00:03:31.134 LINK abort 00:03:31.134 LINK pmr_persistence 00:03:31.134 CC examples/idxd/perf/perf.o 00:03:31.134 LINK thread 00:03:31.134 CXX test/cpp_headers/blob.o 00:03:31.392 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:31.392 CXX test/cpp_headers/blob_bdev.o 00:03:31.649 LINK idxd_perf 00:03:31.649 CXX test/cpp_headers/blobfs.o 00:03:31.649 CXX test/cpp_headers/blobfs_bdev.o 00:03:31.649 LINK interrupt_tgt 00:03:31.649 CXX test/cpp_headers/conf.o 00:03:31.907 CXX test/cpp_headers/config.o 00:03:31.907 CXX test/cpp_headers/cpuset.o 00:03:31.907 CC test/dma/test_dma/test_dma.o 00:03:32.166 CXX test/cpp_headers/crc16.o 00:03:32.166 CC test/env/mem_callbacks/mem_callbacks.o 00:03:32.166 CXX test/cpp_headers/crc32.o 00:03:32.166 CC test/event/event_perf/event_perf.o 00:03:32.166 CC test/env/vtophys/vtophys.o 00:03:32.425 LINK test_dma 00:03:32.425 CXX test/cpp_headers/crc64.o 00:03:32.425 LINK vtophys 00:03:32.425 LINK event_perf 00:03:32.425 CC test/event/reactor/reactor.o 00:03:32.425 CC test/event/reactor_perf/reactor_perf.o 00:03:32.684 CXX test/cpp_headers/dif.o 00:03:32.684 LINK mem_callbacks 00:03:32.684 LINK reactor 00:03:32.684 LINK reactor_perf 00:03:32.684 CXX test/cpp_headers/dma.o 00:03:32.942 CXX test/cpp_headers/endian.o 00:03:32.942 CXX test/cpp_headers/env.o 00:03:33.200 CC test/event/app_repeat/app_repeat.o 00:03:33.200 CXX test/cpp_headers/env_dpdk.o 00:03:33.200 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.200 CXX test/cpp_headers/event.o 00:03:33.200 LINK app_repeat 00:03:33.460 LINK env_dpdk_post_init 00:03:33.460 CXX test/cpp_headers/fd.o 00:03:33.460 CC test/rpc_client/rpc_client_test.o 00:03:33.460 CC test/nvme/aer/aer.o 00:03:33.460 CC test/lvol/esnap/esnap.o 00:03:33.719 CXX test/cpp_headers/fd_group.o 00:03:33.719 LINK rpc_client_test 00:03:33.719 CC test/env/memory/memory_ut.o 00:03:33.719 CXX test/cpp_headers/file.o 00:03:33.982 LINK aer 00:03:33.982 CXX test/cpp_headers/ftl.o 00:03:34.240 CC test/env/pci/pci_ut.o 00:03:34.240 CC test/nvme/reset/reset.o 00:03:34.498 CXX test/cpp_headers/gpt_spec.o 00:03:34.498 CC test/nvme/sgl/sgl.o 00:03:34.498 LINK memory_ut 00:03:34.498 CXX test/cpp_headers/hexlify.o 00:03:34.498 CC test/event/scheduler/scheduler.o 00:03:34.498 CXX test/cpp_headers/histogram_data.o 00:03:34.757 LINK reset 00:03:34.757 LINK pci_ut 00:03:34.757 LINK sgl 00:03:34.757 CXX test/cpp_headers/idxd.o 00:03:35.015 LINK scheduler 00:03:35.015 CC test/thread/poller_perf/poller_perf.o 00:03:35.274 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:35.533 CC test/thread/lock/spdk_lock.o 00:03:35.533 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:35.791 CXX test/cpp_headers/idxd_spec.o 00:03:35.791 LINK poller_perf 00:03:35.791 LINK histogram_ut 00:03:36.049 CXX test/cpp_headers/init.o 00:03:36.049 CC test/nvme/e2edp/nvme_dp.o 00:03:36.049 CC test/nvme/overhead/overhead.o 00:03:36.049 CXX test/cpp_headers/ioat.o 00:03:36.049 CC test/nvme/err_injection/err_injection.o 00:03:36.049 
CC test/nvme/startup/startup.o 00:03:36.308 CXX test/cpp_headers/ioat_spec.o 00:03:36.308 LINK nvme_dp 00:03:36.308 LINK err_injection 00:03:36.308 LINK startup 00:03:36.566 LINK overhead 00:03:36.566 CXX test/cpp_headers/iscsi_spec.o 00:03:36.566 CXX test/cpp_headers/json.o 00:03:36.825 CC test/nvme/reserve/reserve.o 00:03:36.825 CXX test/cpp_headers/jsonrpc.o 00:03:36.825 CXX test/cpp_headers/keyring.o 00:03:37.084 LINK reserve 00:03:37.084 CXX test/cpp_headers/keyring_module.o 00:03:37.342 CXX test/cpp_headers/likely.o 00:03:37.342 LINK spdk_lock 00:03:37.342 CXX test/cpp_headers/log.o 00:03:37.601 CXX test/cpp_headers/lvol.o 00:03:37.859 CC test/nvme/simple_copy/simple_copy.o 00:03:37.859 CXX test/cpp_headers/memory.o 00:03:37.859 CC test/nvme/connect_stress/connect_stress.o 00:03:37.859 CC test/nvme/boot_partition/boot_partition.o 00:03:37.859 CXX test/cpp_headers/mmio.o 00:03:38.117 CC test/nvme/compliance/nvme_compliance.o 00:03:38.117 LINK connect_stress 00:03:38.117 LINK simple_copy 00:03:38.117 LINK boot_partition 00:03:38.117 CXX test/cpp_headers/nbd.o 00:03:38.117 CXX test/cpp_headers/notify.o 00:03:38.117 CXX test/cpp_headers/nvme.o 00:03:38.374 CXX test/cpp_headers/nvme_intel.o 00:03:38.374 LINK accel_ut 00:03:38.374 CXX test/cpp_headers/nvme_ocssd.o 00:03:38.374 LINK nvme_compliance 00:03:38.374 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:38.374 CC test/nvme/fused_ordering/fused_ordering.o 00:03:38.631 CXX test/cpp_headers/nvme_spec.o 00:03:38.631 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:38.631 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:38.631 LINK fused_ordering 00:03:38.631 CXX test/cpp_headers/nvme_zns.o 00:03:38.919 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:38.919 CXX test/cpp_headers/nvmf.o 00:03:39.180 LINK scsi_nvme_ut 00:03:39.180 LINK esnap 00:03:39.180 CXX test/cpp_headers/nvmf_cmd.o 00:03:39.180 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:39.438 CXX test/cpp_headers/nvmf_spec.o 00:03:39.438 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:39.438 CXX test/cpp_headers/nvmf_transport.o 00:03:39.438 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:39.696 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:39.696 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:39.696 CXX test/cpp_headers/opal.o 00:03:39.696 LINK tree_ut 00:03:39.954 CXX test/cpp_headers/opal_spec.o 00:03:39.954 CC test/unit/lib/event/app.c/app_ut.o 00:03:39.954 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:39.954 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:39.954 CXX test/cpp_headers/pci_ids.o 00:03:39.954 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:39.954 LINK dma_ut 00:03:39.954 LINK blob_bdev_ut 00:03:40.211 LINK doorbell_aers 00:03:40.211 CXX test/cpp_headers/pipe.o 00:03:40.211 CXX test/cpp_headers/queue.o 00:03:40.470 LINK ioat_ut 00:03:40.470 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:40.470 CXX test/cpp_headers/reduce.o 00:03:40.470 LINK app_ut 00:03:40.470 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:40.727 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:40.727 CXX test/cpp_headers/rpc.o 00:03:40.727 CXX test/cpp_headers/scheduler.o 00:03:40.984 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:40.984 CXX test/cpp_headers/scsi.o 00:03:40.984 LINK init_grp_ut 00:03:41.242 CXX test/cpp_headers/scsi_spec.o 00:03:41.242 CXX test/cpp_headers/sock.o 00:03:41.242 LINK reactor_ut 00:03:41.242 LINK blobfs_async_ut 00:03:41.499 CXX test/cpp_headers/stdinc.o 00:03:41.499 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:41.499 CC test/nvme/fdp/fdp.o 
00:03:41.499 CXX test/cpp_headers/string.o 00:03:41.756 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:41.756 CXX test/cpp_headers/thread.o 00:03:41.756 LINK conn_ut 00:03:42.014 LINK fdp 00:03:42.014 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:42.014 CXX test/cpp_headers/trace.o 00:03:42.312 CXX test/cpp_headers/trace_parser.o 00:03:42.312 CXX test/cpp_headers/tree.o 00:03:42.312 LINK part_ut 00:03:42.312 CXX test/cpp_headers/ublk.o 00:03:42.312 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:42.571 LINK json_util_ut 00:03:42.571 CXX test/cpp_headers/util.o 00:03:42.829 CXX test/cpp_headers/uuid.o 00:03:42.829 CC test/nvme/cuse/cuse.o 00:03:42.829 LINK gpt_ut 00:03:42.829 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:43.087 CXX test/cpp_headers/version.o 00:03:43.087 CXX test/cpp_headers/vfio_user_pci.o 00:03:43.087 CXX test/cpp_headers/vfio_user_spec.o 00:03:43.345 LINK blobfs_sync_ut 00:03:43.345 CXX test/cpp_headers/vhost.o 00:03:43.603 CXX test/cpp_headers/vmd.o 00:03:43.603 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:43.603 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:43.603 LINK iscsi_ut 00:03:43.603 CXX test/cpp_headers/xor.o 00:03:43.603 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:43.862 LINK param_ut 00:03:43.862 CXX test/cpp_headers/zipf.o 00:03:43.862 LINK cuse 00:03:43.862 LINK jsonrpc_server_ut 00:03:43.862 LINK blobfs_bdev_ut 00:03:44.119 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:44.119 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:44.119 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:44.375 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:44.375 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:44.375 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:44.632 LINK bdev_zone_ut 00:03:44.632 LINK vbdev_lvol_ut 00:03:44.633 LINK portal_grp_ut 00:03:44.633 LINK bdev_ut 00:03:44.889 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:45.147 LINK tgt_node_ut 00:03:45.147 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:45.147 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:45.147 LINK json_write_ut 00:03:45.147 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:45.147 LINK json_parse_ut 00:03:45.404 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:45.404 CC test/unit/lib/log/log.c/log_ut.o 00:03:45.661 LINK bdev_raid_sb_ut 00:03:45.661 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:45.661 LINK concat_ut 00:03:45.661 LINK vbdev_zone_block_ut 00:03:45.661 LINK raid1_ut 00:03:45.661 LINK log_ut 00:03:45.919 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:46.176 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:46.176 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:46.176 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:46.176 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:46.434 LINK notify_ut 00:03:46.693 LINK bdev_raid_ut 00:03:46.693 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:46.693 LINK dev_ut 00:03:47.258 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:47.258 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:47.258 LINK raid5f_ut 00:03:47.258 LINK lvol_ut 00:03:47.515 LINK blob_ut 00:03:47.515 LINK nvme_ut 00:03:47.515 LINK lun_ut 00:03:47.797 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:47.797 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:47.797 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:47.797 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:48.057 CC 
test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:48.315 LINK scsi_ut 00:03:48.315 LINK bdev_ut 00:03:48.572 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:48.830 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:49.088 LINK ctrlr_bdev_ut 00:03:49.347 LINK scsi_pr_ut 00:03:49.347 LINK nvmf_ut 00:03:49.347 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:49.604 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:49.862 LINK scsi_bdev_ut 00:03:49.862 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:49.862 LINK ctrlr_discovery_ut 00:03:50.121 LINK subsystem_ut 00:03:50.121 LINK bdev_nvme_ut 00:03:50.121 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:50.379 LINK ctrlr_ut 00:03:50.379 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:50.638 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:50.638 LINK tcp_ut 00:03:50.638 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:50.896 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:50.896 LINK posix_ut 00:03:51.153 LINK sock_ut 00:03:51.153 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:51.153 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:51.410 LINK nvme_ctrlr_ut 00:03:51.410 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:51.669 LINK nvme_ns_ut 00:03:51.669 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:51.669 LINK nvme_ctrlr_cmd_ut 00:03:51.669 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:51.927 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:51.927 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:51.927 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:51.927 LINK iobuf_ut 00:03:52.184 LINK base64_ut 00:03:52.442 LINK bit_array_ut 00:03:52.442 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:52.442 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:52.700 LINK cpuset_ut 00:03:52.700 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:52.958 LINK pci_event_ut 00:03:52.958 LINK nvme_poll_group_ut 00:03:52.958 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:52.958 LINK nvme_ns_cmd_ut 00:03:52.958 LINK nvme_ns_ocssd_cmd_ut 00:03:53.242 LINK crc16_ut 00:03:53.242 LINK thread_ut 00:03:53.242 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:53.242 LINK nvme_pcie_ut 00:03:53.242 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:53.242 LINK subsystem_ut 00:03:53.500 LINK crc32_ieee_ut 00:03:53.500 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:53.500 LINK crc32c_ut 00:03:53.500 LINK crc64_ut 00:03:53.500 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:53.500 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:53.758 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:53.758 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:53.758 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:53.758 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:53.758 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:54.016 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:54.016 LINK rdma_ut 00:03:54.016 LINK keyring_ut 00:03:54.016 LINK transport_ut 00:03:54.290 LINK rpc_ut 00:03:54.549 LINK nvme_quirks_ut 00:03:54.549 LINK iov_ut 00:03:54.549 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:54.549 LINK rpc_ut 00:03:54.549 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:54.806 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:54.806 CC test/unit/lib/util/math.c/math_ut.o 00:03:55.064 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:55.064 LINK nvme_qpair_ut 00:03:55.064 LINK dif_ut 00:03:55.064 LINK idxd_user_ut 00:03:55.064 CC test/unit/lib/rdma/common.c/common_ut.o 
00:03:55.064 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:55.321 LINK math_ut 00:03:55.321 CC test/unit/lib/util/string.c/string_ut.o 00:03:55.321 LINK pipe_ut 00:03:55.321 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:55.579 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:55.579 LINK idxd_ut 00:03:55.579 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:55.579 LINK common_ut 00:03:55.579 LINK ftl_l2p_ut 00:03:55.837 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:55.837 LINK string_ut 00:03:55.837 LINK xor_ut 00:03:55.837 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:56.096 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:56.096 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:56.096 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:56.354 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:56.354 LINK nvme_io_msg_ut 00:03:56.354 LINK ftl_bitmap_ut 00:03:56.354 LINK nvme_transport_ut 00:03:56.612 LINK nvme_tcp_ut 00:03:56.612 LINK ftl_io_ut 00:03:56.612 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:56.870 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:56.870 LINK ftl_mempool_ut 00:03:56.870 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:57.128 LINK nvme_pcie_common_ut 00:03:57.128 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:57.128 LINK nvme_fabric_ut 00:03:57.386 LINK vhost_ut 00:03:57.386 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:57.386 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:57.644 LINK ftl_band_ut 00:03:57.644 LINK ftl_mngt_ut 00:03:57.901 LINK nvme_opal_ut 00:03:58.466 LINK ftl_sb_ut 00:03:58.724 LINK ftl_layout_upgrade_ut 00:03:59.288 LINK nvme_cuse_ut 00:03:59.854 LINK nvme_rdma_ut 00:04:00.113 00:04:00.113 real 2m7.941s 00:04:00.113 user 10m28.491s 00:04:00.113 sys 2m29.523s 00:04:00.113 00:17:53 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:04:00.113 00:17:53 -- common/autotest_common.sh@10 -- $ set +x 00:04:00.113 ************************************ 00:04:00.113 END TEST unittest_build 00:04:00.113 ************************************ 00:04:00.113 00:17:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:00.113 00:17:53 -- pm/common@30 -- $ signal_monitor_resources TERM 00:04:00.113 00:17:53 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:04:00.113 00:17:53 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:00.113 00:17:53 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:00.113 00:17:53 -- pm/common@45 -- $ pid=2137 00:04:00.113 00:17:53 -- pm/common@52 -- $ sudo kill -TERM 2137 00:04:00.113 00:17:53 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:00.113 00:17:53 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:00.113 00:17:53 -- pm/common@45 -- $ pid=2138 00:04:00.113 00:17:53 -- pm/common@52 -- $ sudo kill -TERM 2138 00:04:00.113 00:17:53 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:00.113 00:17:53 -- nvmf/common.sh@7 -- # uname -s 00:04:00.113 00:17:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:00.113 00:17:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:00.113 00:17:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:00.113 00:17:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:00.113 00:17:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:00.113 00:17:53 -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:04:00.113 00:17:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:00.113 00:17:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:00.113 00:17:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:00.113 00:17:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:00.113 00:17:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:304feff0-032b-473a-9663-2435a5f70c5b 00:04:00.113 00:17:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=304feff0-032b-473a-9663-2435a5f70c5b 00:04:00.113 00:17:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:00.113 00:17:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:00.113 00:17:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:00.113 00:17:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:00.113 00:17:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:00.113 00:17:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:00.113 00:17:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:00.113 00:17:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:00.113 00:17:53 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:00.113 00:17:53 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:00.113 00:17:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:00.113 00:17:53 -- paths/export.sh@5 -- # export PATH 00:04:00.113 00:17:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:00.113 00:17:53 -- nvmf/common.sh@47 -- # : 0 00:04:00.113 00:17:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:00.113 00:17:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:00.113 00:17:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:00.113 00:17:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:00.113 00:17:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:00.113 00:17:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:00.113 00:17:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:00.113 00:17:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:00.113 00:17:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:00.113 00:17:53 -- spdk/autotest.sh@32 -- # uname -s 00:04:00.113 00:17:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:00.113 00:17:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:00.113 00:17:53 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:00.113 00:17:53 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:00.113 00:17:53 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:00.113 00:17:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:00.372 00:17:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:00.372 00:17:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:00.372 00:17:53 -- spdk/autotest.sh@48 -- # udevadm_pid=99416 00:04:00.372 00:17:53 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:00.372 00:17:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:00.372 00:17:53 -- pm/common@17 -- # local monitor 00:04:00.372 00:17:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:00.372 00:17:53 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=99420 00:04:00.372 00:17:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:00.372 00:17:53 -- pm/common@21 -- # date +%s 00:04:00.372 00:17:53 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=99425 00:04:00.372 00:17:53 -- pm/common@26 -- # sleep 1 00:04:00.372 00:17:53 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713917873 00:04:00.372 00:17:53 -- pm/common@21 -- # date +%s 00:04:00.372 00:17:53 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713917873 00:04:00.372 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713917873_collect-cpu-load.pm.log 00:04:00.372 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713917873_collect-vmstat.pm.log 00:04:01.308 00:17:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:01.308 00:17:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:01.308 00:17:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:01.308 00:17:54 -- common/autotest_common.sh@10 -- # set +x 00:04:01.308 00:17:54 -- spdk/autotest.sh@59 -- # create_test_list 00:04:01.308 00:17:54 -- common/autotest_common.sh@734 -- # xtrace_disable 00:04:01.308 00:17:54 -- common/autotest_common.sh@10 -- # set +x 00:04:01.308 00:17:54 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:01.308 00:17:54 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:01.308 00:17:54 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:01.308 00:17:54 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:01.308 00:17:54 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:01.308 00:17:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:01.308 00:17:54 -- common/autotest_common.sh@1441 -- # uname 00:04:01.308 00:17:54 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:04:01.308 00:17:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:01.308 00:17:54 -- common/autotest_common.sh@1461 -- # uname 00:04:01.308 00:17:54 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:04:01.308 00:17:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:01.308 00:17:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:01.308 00:17:54 -- spdk/autotest.sh@72 -- # hash lcov 00:04:01.308 
00:17:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:01.308 00:17:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:01.308 --rc lcov_branch_coverage=1 00:04:01.308 --rc lcov_function_coverage=1 00:04:01.308 --rc genhtml_branch_coverage=1 00:04:01.308 --rc genhtml_function_coverage=1 00:04:01.308 --rc genhtml_legend=1 00:04:01.308 --rc geninfo_all_blocks=1 00:04:01.308 ' 00:04:01.308 00:17:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:01.308 --rc lcov_branch_coverage=1 00:04:01.308 --rc lcov_function_coverage=1 00:04:01.308 --rc genhtml_branch_coverage=1 00:04:01.308 --rc genhtml_function_coverage=1 00:04:01.308 --rc genhtml_legend=1 00:04:01.308 --rc geninfo_all_blocks=1 00:04:01.308 ' 00:04:01.308 00:17:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:01.308 --rc lcov_branch_coverage=1 00:04:01.308 --rc lcov_function_coverage=1 00:04:01.308 --rc genhtml_branch_coverage=1 00:04:01.308 --rc genhtml_function_coverage=1 00:04:01.308 --rc genhtml_legend=1 00:04:01.308 --rc geninfo_all_blocks=1 00:04:01.308 --no-external' 00:04:01.308 00:17:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:01.308 --rc lcov_branch_coverage=1 00:04:01.308 --rc lcov_function_coverage=1 00:04:01.308 --rc genhtml_branch_coverage=1 00:04:01.308 --rc genhtml_function_coverage=1 00:04:01.308 --rc genhtml_legend=1 00:04:01.308 --rc geninfo_all_blocks=1 00:04:01.308 --no-external' 00:04:01.308 00:17:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:01.308 lcov: LCOV version 1.15 00:04:01.308 00:17:55 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:07.867 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:07.867 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:20.084 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:20.085 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:20.085 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:20.085 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:20.085 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:20.085 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:52.202 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:52.202 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:52.202 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:52.202 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:52.202 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:52.202 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:52.202 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:52.202 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:52.202 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:52.202 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:52.202 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:52.202 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:52.202 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:52.202 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:52.202 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:52.202 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:52.202 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:52.202 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:52.203 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:52.203 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:52.203 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:52.203 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no 
functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:52.203 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:52.203 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:52.203 00:18:45 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:52.203 00:18:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:52.203 00:18:45 -- common/autotest_common.sh@10 -- # set +x 00:04:52.203 00:18:45 -- spdk/autotest.sh@91 -- # rm -f 00:04:52.203 00:18:45 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:52.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:52.462 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:52.462 00:18:46 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:52.462 00:18:46 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:52.462 00:18:46 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:52.462 00:18:46 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:52.462 00:18:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:52.462 00:18:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:52.462 00:18:46 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:52.462 00:18:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.462 00:18:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:52.462 00:18:46 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:52.462 00:18:46 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:52.462 00:18:46 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:52.462 00:18:46 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:52.462 00:18:46 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:52.462 00:18:46 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:52.462 No valid GPT data, bailing 00:04:52.462 00:18:46 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:52.462 00:18:46 -- scripts/common.sh@391 -- # pt= 00:04:52.462 00:18:46 -- scripts/common.sh@392 -- # return 1 00:04:52.462 00:18:46 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:52.462 1+0 records in 00:04:52.462 1+0 records out 00:04:52.462 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051947 s, 202 MB/s 00:04:52.462 00:18:46 -- spdk/autotest.sh@118 -- # sync 00:04:52.462 00:18:46 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:52.462 00:18:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:52.462 00:18:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:54.363 00:18:47 -- spdk/autotest.sh@124 -- # uname -s 00:04:54.363 00:18:47 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:54.363 00:18:47 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:54.363 00:18:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.363 00:18:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 
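The pre-cleanup step traced just above (the block_in_use check on /dev/nvme0n1, the empty blkid PTTYPE probe reported as "No valid GPT data, bailing", then the 1 MiB zero-fill and sync) boils down to roughly the following shell sequence. This is a minimal sketch reassembled from the commands echoed in the xtrace, not the autotest script itself; the device node /dev/nvme0n1 and the 1 MiB wipe size are simply the values this particular run used.

# Probe for an existing partition table; blkid prints the PTTYPE tag only
# when it finds one, so an empty result means no valid GPT/MBR data.
pt=$(blkid -s PTTYPE -o value /dev/nvme0n1)
if [ -z "$pt" ]; then
    # Device appears unused: zero the first 1 MiB so the rest of the run
    # starts against a clean namespace, then flush the write.
    dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
    sync
fi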
00:04:54.363 00:18:47 -- common/autotest_common.sh@10 -- # set +x 00:04:54.363 ************************************ 00:04:54.363 START TEST setup.sh 00:04:54.363 ************************************ 00:04:54.363 00:18:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:54.363 * Looking for test storage... 00:04:54.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:54.363 00:18:47 -- setup/test-setup.sh@10 -- # uname -s 00:04:54.363 00:18:48 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:54.363 00:18:48 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:54.363 00:18:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.363 00:18:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.363 00:18:48 -- common/autotest_common.sh@10 -- # set +x 00:04:54.363 ************************************ 00:04:54.363 START TEST acl 00:04:54.363 ************************************ 00:04:54.363 00:18:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:54.620 * Looking for test storage... 00:04:54.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:54.620 00:18:48 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:54.620 00:18:48 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:54.620 00:18:48 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:54.620 00:18:48 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:54.620 00:18:48 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:54.620 00:18:48 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:54.620 00:18:48 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:54.620 00:18:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:54.620 00:18:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:54.620 00:18:48 -- setup/acl.sh@12 -- # devs=() 00:04:54.620 00:18:48 -- setup/acl.sh@12 -- # declare -a devs 00:04:54.620 00:18:48 -- setup/acl.sh@13 -- # drivers=() 00:04:54.620 00:18:48 -- setup/acl.sh@13 -- # declare -A drivers 00:04:54.620 00:18:48 -- setup/acl.sh@51 -- # setup reset 00:04:54.620 00:18:48 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.620 00:18:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.184 00:18:48 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:55.184 00:18:48 -- setup/acl.sh@16 -- # local dev driver 00:04:55.184 00:18:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.184 00:18:48 -- setup/acl.sh@15 -- # setup output status 00:04:55.184 00:18:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.184 00:18:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:55.442 00:18:49 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:55.442 00:18:49 -- setup/acl.sh@19 -- # continue 00:04:55.442 00:18:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 Hugepages 00:04:55.442 node hugesize free / total 00:04:55.442 00:18:49 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:55.442 00:18:49 -- setup/acl.sh@19 -- # continue 00:04:55.442 00:18:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.442 00:04:55.442 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.442 00:18:49 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:55.442 00:18:49 -- 
setup/acl.sh@19 -- # continue 00:04:55.442 00:18:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.700 00:18:49 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:55.700 00:18:49 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:55.700 00:18:49 -- setup/acl.sh@20 -- # continue 00:04:55.700 00:18:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.700 00:18:49 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:55.700 00:18:49 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:55.700 00:18:49 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:55.700 00:18:49 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:55.700 00:18:49 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:55.700 00:18:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:55.700 00:18:49 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:55.700 00:18:49 -- setup/acl.sh@54 -- # run_test denied denied 00:04:55.700 00:18:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.700 00:18:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.700 00:18:49 -- common/autotest_common.sh@10 -- # set +x 00:04:55.700 ************************************ 00:04:55.700 START TEST denied 00:04:55.700 ************************************ 00:04:55.700 00:18:49 -- common/autotest_common.sh@1111 -- # denied 00:04:55.700 00:18:49 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:55.700 00:18:49 -- setup/acl.sh@38 -- # setup output config 00:04:55.700 00:18:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.700 00:18:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.700 00:18:49 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:57.143 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:57.143 00:18:50 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:57.143 00:18:50 -- setup/acl.sh@28 -- # local dev driver 00:04:57.143 00:18:50 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:57.143 00:18:50 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:57.143 00:18:50 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:57.143 00:18:50 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:57.143 00:18:50 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:57.143 00:18:50 -- setup/acl.sh@41 -- # setup reset 00:04:57.143 00:18:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.143 00:18:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.709 00:04:57.709 real 0m1.920s 00:04:57.709 user 0m0.484s 00:04:57.709 sys 0m1.512s 00:04:57.709 00:18:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.709 00:18:51 -- common/autotest_common.sh@10 -- # set +x 00:04:57.709 ************************************ 00:04:57.709 END TEST denied 00:04:57.709 ************************************ 00:04:57.709 00:18:51 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:57.709 00:18:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.709 00:18:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.709 00:18:51 -- common/autotest_common.sh@10 -- # set +x 00:04:57.709 ************************************ 00:04:57.709 START TEST allowed 00:04:57.709 ************************************ 00:04:57.709 00:18:51 -- common/autotest_common.sh@1111 -- # allowed 00:04:57.709 00:18:51 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:57.709 00:18:51 -- 
setup/acl.sh@45 -- # setup output config 00:04:57.709 00:18:51 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:57.709 00:18:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.709 00:18:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.610 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.610 00:18:53 -- setup/acl.sh@47 -- # verify 00:04:59.610 00:18:53 -- setup/acl.sh@28 -- # local dev driver 00:04:59.610 00:18:53 -- setup/acl.sh@48 -- # setup reset 00:04:59.610 00:18:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.610 00:18:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:59.868 00:04:59.868 real 0m2.135s 00:04:59.868 user 0m0.505s 00:04:59.868 sys 0m1.632s 00:04:59.868 00:18:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.868 ************************************ 00:04:59.868 END TEST allowed 00:04:59.868 ************************************ 00:04:59.868 00:18:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.868 00:04:59.868 real 0m5.579s 00:04:59.868 user 0m1.654s 00:04:59.868 sys 0m4.092s 00:04:59.868 00:18:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.868 00:18:53 -- common/autotest_common.sh@10 -- # set +x 00:04:59.868 ************************************ 00:04:59.868 END TEST acl 00:04:59.868 ************************************ 00:05:00.128 00:18:53 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:00.128 00:18:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.128 00:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.128 00:18:53 -- common/autotest_common.sh@10 -- # set +x 00:05:00.128 ************************************ 00:05:00.128 START TEST hugepages 00:05:00.128 ************************************ 00:05:00.128 00:18:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:00.128 * Looking for test storage... 
00:05:00.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:00.128 00:18:53 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:00.128 00:18:53 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:00.128 00:18:53 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:00.128 00:18:53 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:00.128 00:18:53 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:00.128 00:18:53 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:00.128 00:18:53 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:00.128 00:18:53 -- setup/common.sh@18 -- # local node= 00:05:00.128 00:18:53 -- setup/common.sh@19 -- # local var val 00:05:00.128 00:18:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.128 00:18:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.128 00:18:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.128 00:18:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.128 00:18:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.128 00:18:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 2884376 kB' 'MemAvailable: 7398528 kB' 'Buffers: 35420 kB' 'Cached: 4614844 kB' 'SwapCached: 0 kB' 'Active: 1011888 kB' 'Inactive: 3758888 kB' 'Active(anon): 1032 kB' 'Inactive(anon): 131140 kB' 'Active(file): 1010856 kB' 'Inactive(file): 3627748 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 476 kB' 'Writeback: 0 kB' 'AnonPages: 149552 kB' 'Mapped: 67992 kB' 'Shmem: 2596 kB' 'KReclaimable: 196852 kB' 'Slab: 262424 kB' 'SReclaimable: 196852 kB' 'SUnreclaim: 65572 kB' 'KernelStack: 4508 kB' 'PageTables: 3600 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024336 kB' 'Committed_AS: 508308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 
-- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.128 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.128 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 
00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # continue 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.129 00:18:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.129 00:18:53 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:00.129 00:18:53 -- setup/common.sh@33 -- # echo 2048 00:05:00.129 00:18:53 -- setup/common.sh@33 -- # return 0 00:05:00.129 00:18:53 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:00.129 00:18:53 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:00.129 00:18:53 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:00.129 00:18:53 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:00.129 00:18:53 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:00.129 00:18:53 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:00.129 00:18:53 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:00.129 00:18:53 -- setup/hugepages.sh@207 -- # get_nodes 00:05:00.129 00:18:53 -- setup/hugepages.sh@27 -- # local node 00:05:00.129 00:18:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.129 00:18:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:00.129 00:18:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:00.129 00:18:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.129 00:18:53 -- setup/hugepages.sh@208 -- # clear_hp 00:05:00.129 00:18:53 -- setup/hugepages.sh@37 -- # local node hp 00:05:00.129 00:18:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.129 00:18:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.129 00:18:53 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.129 00:18:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.129 00:18:53 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.129 00:18:53 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:00.129 00:18:53 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:00.129 00:18:53 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:00.129 00:18:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.129 00:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.129 00:18:53 -- common/autotest_common.sh@10 -- # set +x 00:05:00.387 ************************************ 00:05:00.387 START TEST default_setup 00:05:00.387 ************************************ 00:05:00.387 00:18:53 -- common/autotest_common.sh@1111 -- # default_setup 00:05:00.387 00:18:53 -- 
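With the 2048 kB hugepage size known and the per-node counters cleared (CLEAR_HUGE=yes), run_test launches default_setup, and the get_test_nr_hugepages 2097152 0 call the trace enters immediately below turns a 2 GiB request into a page count: 2097152 kB divided by the 2048 kB hugepage size gives the nr_hugepages=1024 that is then assigned to node 0. The arithmetic, spelled out with hypothetical variable names (the numbers are the ones visible in the trace):

    size_kb=2097152                  # requested hugepage memory, 2 GiB in kB
    hugepagesize_kb=2048             # Hugepagesize reported by /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "$nr_hugepages"             # -> 1024, the count assigned to node 0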
setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:00.387 00:18:53 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:00.387 00:18:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:00.387 00:18:53 -- setup/hugepages.sh@51 -- # shift 00:05:00.387 00:18:53 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:00.387 00:18:53 -- setup/hugepages.sh@52 -- # local node_ids 00:05:00.387 00:18:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.387 00:18:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:00.387 00:18:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:00.387 00:18:53 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:00.387 00:18:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.387 00:18:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:00.387 00:18:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:00.387 00:18:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.387 00:18:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.387 00:18:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:00.387 00:18:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:00.387 00:18:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:00.387 00:18:53 -- setup/hugepages.sh@73 -- # return 0 00:05:00.387 00:18:53 -- setup/hugepages.sh@137 -- # setup output 00:05:00.387 00:18:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.387 00:18:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:00.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:00.900 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.470 00:18:54 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:01.470 00:18:54 -- setup/hugepages.sh@89 -- # local node 00:05:01.470 00:18:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:01.470 00:18:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:01.470 00:18:54 -- setup/hugepages.sh@92 -- # local surp 00:05:01.470 00:18:54 -- setup/hugepages.sh@93 -- # local resv 00:05:01.470 00:18:54 -- setup/hugepages.sh@94 -- # local anon 00:05:01.470 00:18:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:01.470 00:18:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:01.470 00:18:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:01.470 00:18:54 -- setup/common.sh@18 -- # local node= 00:05:01.470 00:18:54 -- setup/common.sh@19 -- # local var val 00:05:01.470 00:18:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.470 00:18:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.470 00:18:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.470 00:18:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.470 00:18:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.470 00:18:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.470 00:18:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4961912 kB' 'MemAvailable: 9476192 kB' 'Buffers: 35420 kB' 'Cached: 4614856 kB' 'SwapCached: 0 kB' 'Active: 1011880 kB' 'Inactive: 3774972 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 147184 kB' 'Active(file): 1010828 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 
'SwapFree: 0 kB' 'Dirty: 524 kB' 'Writeback: 0 kB' 'AnonPages: 165876 kB' 'Mapped: 68340 kB' 'Shmem: 2596 kB' 'KReclaimable: 196968 kB' 'Slab: 263232 kB' 'SReclaimable: 196968 kB' 'SUnreclaim: 66264 kB' 'KernelStack: 4452 kB' 'PageTables: 3696 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 523600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.470 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.470 00:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:54 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.471 00:18:55 -- setup/common.sh@33 -- # echo 0 00:05:01.471 00:18:55 -- setup/common.sh@33 -- # return 0 00:05:01.471 00:18:55 -- setup/hugepages.sh@97 -- # anon=0 00:05:01.471 00:18:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.471 00:18:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.471 00:18:55 -- setup/common.sh@18 -- # local node= 00:05:01.471 00:18:55 -- setup/common.sh@19 -- # local var val 00:05:01.471 00:18:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.471 00:18:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.471 00:18:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.471 00:18:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.471 00:18:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.471 00:18:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4961912 kB' 'MemAvailable: 9476192 kB' 'Buffers: 35420 kB' 'Cached: 4614856 kB' 'SwapCached: 0 kB' 'Active: 1011880 kB' 'Inactive: 3774712 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 146924 kB' 'Active(file): 1010828 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 524 kB' 'Writeback: 0 kB' 'AnonPages: 165616 kB' 'Mapped: 68340 kB' 'Shmem: 2596 kB' 'KReclaimable: 196968 kB' 'Slab: 263232 kB' 'SReclaimable: 196968 kB' 'SUnreclaim: 66264 kB' 'KernelStack: 4452 kB' 'PageTables: 3696 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 523600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.471 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.471 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 
00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.472 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.472 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.472 00:18:55 -- setup/common.sh@33 -- # echo 0 00:05:01.472 00:18:55 -- setup/common.sh@33 -- # return 0 00:05:01.472 00:18:55 -- setup/hugepages.sh@99 -- # surp=0 00:05:01.472 00:18:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:01.472 00:18:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:01.472 00:18:55 -- setup/common.sh@18 -- # local node= 00:05:01.472 00:18:55 -- setup/common.sh@19 -- # local var val 00:05:01.472 00:18:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.472 00:18:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.472 00:18:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.472 00:18:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.472 00:18:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.472 00:18:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4962404 kB' 'MemAvailable: 9476688 kB' 'Buffers: 35420 kB' 'Cached: 4614860 kB' 'SwapCached: 0 kB' 'Active: 1011880 kB' 'Inactive: 3774552 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 146760 kB' 'Active(file): 1010828 kB' 'Inactive(file): 3627792 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 524 kB' 'Writeback: 0 kB' 'AnonPages: 165192 kB' 'Mapped: 68132 kB' 'Shmem: 2596 kB' 'KReclaimable: 196968 kB' 'Slab: 263136 kB' 'SReclaimable: 196968 kB' 'SUnreclaim: 66168 kB' 'KernelStack: 4400 kB' 'PageTables: 3732 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 523600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 
10485760 kB' 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 
00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.473 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.473 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 
00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.474 00:18:55 -- setup/common.sh@33 -- # echo 0 00:05:01.474 00:18:55 -- setup/common.sh@33 -- # return 0 00:05:01.474 00:18:55 -- setup/hugepages.sh@100 -- # resv=0 00:05:01.474 nr_hugepages=1024 00:05:01.474 00:18:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:01.474 resv_hugepages=0 00:05:01.474 00:18:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:01.474 surplus_hugepages=0 00:05:01.474 00:18:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:01.474 anon_hugepages=0 00:05:01.474 00:18:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:01.474 00:18:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.474 00:18:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:01.474 00:18:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:01.474 00:18:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:01.474 00:18:55 -- setup/common.sh@18 -- # local node= 00:05:01.474 00:18:55 -- setup/common.sh@19 -- # local var val 00:05:01.474 00:18:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.474 00:18:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.474 00:18:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.474 00:18:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.474 00:18:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.474 00:18:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12242980 kB' 'MemFree: 4962404 kB' 'MemAvailable: 9476688 kB' 'Buffers: 35420 kB' 'Cached: 4614860 kB' 'SwapCached: 0 kB' 'Active: 1011880 kB' 'Inactive: 3774500 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 146708 kB' 'Active(file): 1010828 kB' 'Inactive(file): 3627792 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 524 kB' 'Writeback: 0 kB' 'AnonPages: 165360 kB' 'Mapped: 68132 kB' 'Shmem: 2596 kB' 'KReclaimable: 196968 kB' 'Slab: 263140 kB' 'SReclaimable: 196968 kB' 'SUnreclaim: 66172 kB' 'KernelStack: 4420 kB' 'PageTables: 3612 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 523600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19628 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 
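By this point verify_nr_hugepages has gathered anon=0, surp=0 and resv=0, echoed the summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and is re-reading HugePages_Total from /proc/meminfo; the checks visible in the trace compare the expected 1024 against nr_hugepages plus the surplus and reserved counts, first for the global counter and then for the HugePages_Total value being read here. A sketch of that verification, reusing the illustrative get_meminfo_sketch helper from earlier (the real script also pulls some of these values from other paths):

    nr_hugepages=1024                                # requested earlier
    surp=$(get_meminfo_sketch HugePages_Surp)        # 0 in the trace
    resv=$(get_meminfo_sketch HugePages_Rsvd)        # 0 in the trace
    total=$(get_meminfo_sketch HugePages_Total)      # 1024 in the trace
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage allocation verified"
    else
        echo "unexpected hugepage accounting" >&2
    fi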
00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 
-- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.474 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.474 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.475 00:18:55 -- setup/common.sh@33 -- # echo 1024 00:05:01.475 00:18:55 -- setup/common.sh@33 -- # return 0 00:05:01.475 00:18:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.475 00:18:55 -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.475 00:18:55 -- setup/hugepages.sh@27 -- # local node 00:05:01.475 00:18:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.475 00:18:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:01.475 00:18:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:01.475 00:18:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.475 00:18:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.475 00:18:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.475 00:18:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.475 00:18:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.475 00:18:55 -- setup/common.sh@18 -- # local node=0 00:05:01.475 
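
The get_nodes step traced just above enumerates the NUMA node directories and records one hugepage count per node (nodes_sys[0]=1024 and no_nodes=1 on this single-node VM). A stand-alone bash sketch of that idea follows; the names nodes_sys and no_nodes mirror the trace, while reading the count from the per-node sysfs nr_hugepages file is only an illustrative way to obtain the value, since the log does not show where setup/hugepages.sh takes it from.

#!/usr/bin/env bash
# Walk /sys/devices/system/node/node<N> (the node+([0-9]) glob needs extglob)
# and record a hugepage count per node, as the get_nodes trace above does.
shopt -s extglob nullglob
declare -A nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # hugepages-2048kB matches the 2048 kB Hugepagesize shown in the meminfo dumps
    nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
for n in "${!nodes_sys[@]}"; do
    echo "node$n: ${nodes_sys[$n]} hugepages"
done
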
00:18:55 -- setup/common.sh@19 -- # local var val 00:05:01.475 00:18:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.475 00:18:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.475 00:18:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.475 00:18:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.475 00:18:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.475 00:18:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4962656 kB' 'MemUsed: 7280324 kB' 'SwapCached: 0 kB' 'Active: 1011872 kB' 'Inactive: 3774240 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 146448 kB' 'Active(file): 1010828 kB' 'Inactive(file): 3627792 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 524 kB' 'Writeback: 0 kB' 'FilePages: 4650280 kB' 'Mapped: 68124 kB' 'AnonPages: 165016 kB' 'Shmem: 2596 kB' 'KernelStack: 4356 kB' 'PageTables: 3704 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 196968 kB' 'Slab: 263140 kB' 'SReclaimable: 196968 kB' 'SUnreclaim: 66172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.475 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.475 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.476 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.476 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.476 00:18:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.476 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.476 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.476 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.476 00:18:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.476 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.476 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.476 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.476 00:18:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.476 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.476 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.476 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.476 00:18:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.476 
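
The get_meminfo call being traced here does three things that are visible in the entries above: it picks /proc/meminfo or the per-node /sys/devices/system/node/node0/meminfo, strips the 'Node 0 ' prefix that the per-node file puts on every line, and then splits each line on ': ' until the requested key matches, echoing just the value. A compact re-implementation of that lookup pattern (a sketch: it streams the file line by line instead of mapfile-ing it, but the matching logic is the one shown in the trace):

#!/usr/bin/env bash
# Look up one meminfo key, optionally for a single NUMA node.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}          # per-node lines start with "Node 0 "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                     # e.g. 1024 for HugePages_Total
            return 0
        fi
    done <"$mem_f"
    return 1
}

get_meminfo HugePages_Total       # system-wide pool size
get_meminfo HugePages_Surp 0      # surplus pages on NUMA node 0
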
00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.476 00:18:55 [setup/common.sh @31/@32 scan loop over Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, HugePages_Total; none matches HugePages_Surp, each is skipped with 'continue'] 00:05:01.476 00:18:55 --
setup/common.sh@31 -- # IFS=': ' 00:05:01.476 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.476 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.476 00:18:55 -- setup/common.sh@32 -- # continue 00:05:01.476 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.476 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.476 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.476 00:18:55 -- setup/common.sh@33 -- # echo 0 00:05:01.476 00:18:55 -- setup/common.sh@33 -- # return 0 00:05:01.476 00:18:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.476 00:18:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.476 00:18:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.476 00:18:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.476 node0=1024 expecting 1024 00:05:01.476 00:18:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:01.476 00:18:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:01.476 00:05:01.476 real 0m1.211s 00:05:01.476 user 0m0.376s 00:05:01.476 sys 0m0.830s 00:05:01.476 00:18:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.476 00:18:55 -- common/autotest_common.sh@10 -- # set +x 00:05:01.476 ************************************ 00:05:01.476 END TEST default_setup 00:05:01.476 ************************************ 00:05:01.476 00:18:55 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:01.476 00:18:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.476 00:18:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.476 00:18:55 -- common/autotest_common.sh@10 -- # set +x 00:05:01.476 ************************************ 00:05:01.476 START TEST per_node_1G_alloc 00:05:01.476 ************************************ 00:05:01.734 00:18:55 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:05:01.734 00:18:55 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:01.734 00:18:55 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:01.734 00:18:55 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:01.734 00:18:55 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:01.734 00:18:55 -- setup/hugepages.sh@51 -- # shift 00:05:01.734 00:18:55 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:01.734 00:18:55 -- setup/hugepages.sh@52 -- # local node_ids 00:05:01.734 00:18:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:01.734 00:18:55 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:01.734 00:18:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:01.734 00:18:55 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:01.734 00:18:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:01.734 00:18:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:01.734 00:18:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:01.734 00:18:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:01.734 00:18:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:01.734 00:18:55 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:01.734 00:18:55 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:01.734 00:18:55 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:01.734 00:18:55 -- setup/hugepages.sh@73 -- # return 0 00:05:01.734 00:18:55 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:01.734 00:18:55 
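
per_node_1G_alloc, which starts in the entries above, passes 1048576 kB and node 0 to get_test_nr_hugepages; with the 2048 kB Hugepagesize reported in the meminfo dumps that is 1048576 / 2048 = 512 pages, matching the nr_hugepages=512 and nodes_test[0]=512 the trace records before NRHUGE and HUGENODE are set for setup.sh. A small stand-alone sketch of that conversion (values hardcoded to this run for illustration):

#!/usr/bin/env bash
# Turn a requested size in kB into a hugepage count and assign it to the
# requested NUMA node(s): 1048576 kB / 2048 kB per page = 512 pages on node 0.
size_kb=1048576                                  # the test's request, 1 GiB in kB
node_ids=(0)                                     # HUGENODE=0 in this run
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
nr_hugepages=$(( size_kb / hugepagesize_kb ))
declare -a nodes_test=()
for node_id in "${node_ids[@]}"; do
    nodes_test[node_id]=$nr_hugepages
done
echo "nr_hugepages=$nr_hugepages"
for n in "${!nodes_test[@]}"; do
    echo "node$n=${nodes_test[n]} expecting ${nodes_test[n]}"
done
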
-- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:01.734 00:18:55 -- setup/hugepages.sh@146 -- # setup output 00:05:01.734 00:18:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.734 00:18:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:01.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:01.993 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.254 00:18:55 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:02.254 00:18:55 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:02.254 00:18:55 -- setup/hugepages.sh@89 -- # local node 00:05:02.254 00:18:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.254 00:18:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.254 00:18:55 -- setup/hugepages.sh@92 -- # local surp 00:05:02.254 00:18:55 -- setup/hugepages.sh@93 -- # local resv 00:05:02.254 00:18:55 -- setup/hugepages.sh@94 -- # local anon 00:05:02.254 00:18:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.254 00:18:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.254 00:18:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.254 00:18:55 -- setup/common.sh@18 -- # local node= 00:05:02.254 00:18:55 -- setup/common.sh@19 -- # local var val 00:05:02.254 00:18:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.254 00:18:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.254 00:18:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.254 00:18:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.254 00:18:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.254 00:18:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.254 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.254 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.254 00:18:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6011168 kB' 'MemAvailable: 10525484 kB' 'Buffers: 35420 kB' 'Cached: 4614860 kB' 'SwapCached: 0 kB' 'Active: 1011880 kB' 'Inactive: 3774984 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 147192 kB' 'Active(file): 1010828 kB' 'Inactive(file): 3627792 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 564 kB' 'Writeback: 0 kB' 'AnonPages: 165852 kB' 'Mapped: 68088 kB' 'Shmem: 2596 kB' 'KReclaimable: 197000 kB' 'Slab: 262788 kB' 'SReclaimable: 197000 kB' 'SUnreclaim: 65788 kB' 'KernelStack: 4432 kB' 'PageTables: 3824 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 526928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:02.254 00:18:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.254 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.254 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.254 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.254 00:18:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.254 00:18:55 -- setup/common.sh@32 
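
verify_nr_hugepages, entered in the entries above, first looks at the transparent hugepage state (the string 'always [madvise] never' in the @96 test) and only reads AnonHugePages when that state is not pinned to [never]. A self-contained sketch of that step; the awk lookup is a stand-in for the get_meminfo helper sketched earlier:

#!/usr/bin/env bash
# Record THP-backed anonymous memory only when transparent hugepages are not
# disabled, mirroring the [[ ... != *\[\n\e\v\e\r\]* ]] test in the trace.
thp=/sys/kernel/mm/transparent_hugepage/enabled
anon=0
if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # value in kB
fi
echo "anon_hugepages=$anon"
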
-- # continue 00:05:02.254 00:18:55 [setup/common.sh @31/@32 scan loop over MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables; none matches AnonHugePages, each is skipped with 'continue'] 00:05:02.254
00:18:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.254 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.254 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.254 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.254 00:18:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.254 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.254 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.254 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.254 00:18:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.254 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.254 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.254 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.254 00:18:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.254 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.254 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.254 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.255 00:18:55 -- setup/common.sh@33 -- # echo 0 00:05:02.255 00:18:55 -- setup/common.sh@33 -- # return 0 00:05:02.255 00:18:55 -- setup/hugepages.sh@97 -- # anon=0 00:05:02.255 00:18:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.255 00:18:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.255 00:18:55 -- setup/common.sh@18 -- # local node= 00:05:02.255 00:18:55 -- setup/common.sh@19 -- # local var val 00:05:02.255 00:18:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.255 00:18:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.255 00:18:55 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:02.255 00:18:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.255 00:18:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.255 00:18:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6011956 kB' 'MemAvailable: 10526272 kB' 'Buffers: 35420 kB' 'Cached: 4614860 kB' 'SwapCached: 0 kB' 'Active: 1011880 kB' 'Inactive: 3774724 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 146932 kB' 'Active(file): 1010828 kB' 'Inactive(file): 3627792 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 564 kB' 'Writeback: 0 kB' 'AnonPages: 165332 kB' 'Mapped: 68088 kB' 'Shmem: 2596 kB' 'KReclaimable: 197000 kB' 'Slab: 262788 kB' 'SReclaimable: 197000 kB' 'SUnreclaim: 65788 kB' 'KernelStack: 4432 kB' 'PageTables: 3824 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 523600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.255 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.255 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.255 
00:18:55 [setup/common.sh @31/@32 scan loop over Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, HugePages_Total, HugePages_Free, HugePages_Rsvd; none matches HugePages_Surp, each is skipped with 'continue'] 00:05:02.256 00:18:55 --
setup/common.sh@31 -- # read -r var val _ 00:05:02.256 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.256 00:18:55 -- setup/common.sh@33 -- # echo 0 00:05:02.256 00:18:55 -- setup/common.sh@33 -- # return 0 00:05:02.256 00:18:55 -- setup/hugepages.sh@99 -- # surp=0 00:05:02.256 00:18:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.256 00:18:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.256 00:18:55 -- setup/common.sh@18 -- # local node= 00:05:02.256 00:18:55 -- setup/common.sh@19 -- # local var val 00:05:02.256 00:18:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.256 00:18:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.256 00:18:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.256 00:18:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.256 00:18:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.256 00:18:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.256 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.256 00:18:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6012208 kB' 'MemAvailable: 10526524 kB' 'Buffers: 35420 kB' 'Cached: 4614860 kB' 'SwapCached: 0 kB' 'Active: 1011880 kB' 'Inactive: 3774516 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146728 kB' 'Active(file): 1010832 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 564 kB' 'Writeback: 0 kB' 'AnonPages: 165384 kB' 'Mapped: 68088 kB' 'Shmem: 2596 kB' 'KReclaimable: 197000 kB' 'Slab: 262788 kB' 'SReclaimable: 197000 kB' 'SUnreclaim: 65788 kB' 'KernelStack: 4384 kB' 'PageTables: 3704 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 523600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:02.256 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.256 00:18:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.256 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.256 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.256 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.256 00:18:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.256 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.256 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.256 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.256 00:18:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.256 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.256 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.256 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.256 00:18:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.256 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.256 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.256 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.256 00:18:55 -- setup/common.sh@32 -- # [[ Cached == 
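
With surp=0 recorded, the check moves on to HugePages_Rsvd. For reference: HugePages_Surp counts surplus pages the kernel allocated beyond nr_hugepages through overcommit, and HugePages_Rsvd counts pages that a mapping has reserved but not yet faulted in; both are 0 in the dumps above. The four HugePages_* counters can also be pulled in one pass (a shortcut shown only for illustration, not what setup/common.sh does):

#!/usr/bin/env bash
# /proc/meminfo lists these counters in exactly this order, as the dumps
# earlier in the log show: Total, Free, Rsvd, Surp.
read -r total free rsvd surp < <(
    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {printf "%s ", $2}' /proc/meminfo
)
echo "total=$total free=$free rsvd=$rsvd surp=$surp"
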
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.256 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.256 00:18:55 [setup/common.sh @31/@32 scan loop over SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped; none matches HugePages_Rsvd, each is skipped with 'continue'] 00:05:02.257
00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.257 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.257 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.257 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.257 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.257 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.257 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.257 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.257 00:18:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.257 00:18:55 -- setup/common.sh@33 -- # echo 0 00:05:02.257 00:18:55 -- setup/common.sh@33 -- # return 0 00:05:02.257 00:18:55 -- setup/hugepages.sh@100 -- # resv=0 00:05:02.257 nr_hugepages=512 00:05:02.257 00:18:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:02.257 resv_hugepages=0 00:05:02.257 00:18:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.257 surplus_hugepages=0 00:05:02.257 00:18:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.257 anon_hugepages=0 00:05:02.257 00:18:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.257 00:18:55 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:02.257 00:18:55 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:02.257 00:18:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.257 00:18:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.257 00:18:55 -- setup/common.sh@18 -- # local node= 00:05:02.257 00:18:55 -- setup/common.sh@19 -- # local var val 00:05:02.257 00:18:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.257 00:18:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.257 00:18:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.257 00:18:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.257 00:18:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.257 00:18:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.257 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6012932 kB' 'MemAvailable: 10527248 kB' 'Buffers: 35420 kB' 'Cached: 4614860 kB' 'SwapCached: 0 kB' 'Active: 1011876 kB' 'Inactive: 3774332 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 146544 kB' 'Active(file): 1010832 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 564 kB' 'Writeback: 0 kB' 'AnonPages: 165120 kB' 'Mapped: 68076 kB' 'Shmem: 2596 kB' 'KReclaimable: 197000 kB' 'Slab: 262796 kB' 'SReclaimable: 197000 kB' 'SUnreclaim: 65796 kB' 'KernelStack: 4404 kB' 'PageTables: 3820 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 523600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 
00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.258 
00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.258 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.258 00:18:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.259 00:18:56 -- setup/common.sh@33 -- # echo 512 00:05:02.259 00:18:56 -- setup/common.sh@33 -- # return 0 00:05:02.259 00:18:56 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:02.259 00:18:56 -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.259 00:18:56 -- setup/hugepages.sh@27 -- # local node 00:05:02.259 00:18:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.259 00:18:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:02.259 00:18:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:02.259 00:18:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.259 00:18:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.259 00:18:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.259 00:18:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.259 00:18:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.259 00:18:56 -- setup/common.sh@18 -- # local node=0 00:05:02.259 00:18:56 -- setup/common.sh@19 -- # local var val 00:05:02.259 00:18:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.259 00:18:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.259 00:18:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.259 00:18:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.259 00:18:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.259 00:18:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6012432 kB' 'MemUsed: 6230548 kB' 'SwapCached: 0 kB' 'Active: 1011876 kB' 'Inactive: 3774852 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 147064 kB' 'Active(file): 1010832 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 564 kB' 'Writeback: 0 kB' 'FilePages: 4650280 kB' 'Mapped: 68076 kB' 'AnonPages: 165640 kB' 'Shmem: 2596 kB' 'KernelStack: 4472 kB' 'PageTables: 3820 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197000 kB' 'Slab: 262796 kB' 'SReclaimable: 
197000 kB' 'SUnreclaim: 65796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.259 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.259 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # continue 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.260 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.260 00:18:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.260 00:18:56 -- setup/common.sh@33 -- # echo 0 00:05:02.260 00:18:56 -- setup/common.sh@33 -- # return 0 00:05:02.260 00:18:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.260 00:18:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.260 00:18:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.260 00:18:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.260 node0=512 expecting 512 00:05:02.260 00:18:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:02.260 00:18:56 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:02.260 00:05:02.260 real 0m0.782s 00:05:02.260 user 0m0.355s 
00:05:02.260 sys 0m0.471s 00:05:02.260 00:18:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.260 00:18:56 -- common/autotest_common.sh@10 -- # set +x 00:05:02.260 ************************************ 00:05:02.260 END TEST per_node_1G_alloc 00:05:02.260 ************************************ 00:05:02.519 00:18:56 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:02.519 00:18:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.519 00:18:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.519 00:18:56 -- common/autotest_common.sh@10 -- # set +x 00:05:02.519 ************************************ 00:05:02.519 START TEST even_2G_alloc 00:05:02.519 ************************************ 00:05:02.519 00:18:56 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:05:02.519 00:18:56 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:02.519 00:18:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:02.519 00:18:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:02.519 00:18:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.519 00:18:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:02.519 00:18:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:02.519 00:18:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:02.519 00:18:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.519 00:18:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:02.519 00:18:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:02.519 00:18:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.519 00:18:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.519 00:18:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:02.519 00:18:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:02.519 00:18:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.519 00:18:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:02.519 00:18:56 -- setup/hugepages.sh@83 -- # : 0 00:05:02.520 00:18:56 -- setup/hugepages.sh@84 -- # : 0 00:05:02.520 00:18:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.520 00:18:56 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:02.520 00:18:56 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:02.520 00:18:56 -- setup/hugepages.sh@153 -- # setup output 00:05:02.520 00:18:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.520 00:18:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.778 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:02.778 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:03.345 00:18:56 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:03.345 00:18:56 -- setup/hugepages.sh@89 -- # local node 00:05:03.345 00:18:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.345 00:18:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.345 00:18:56 -- setup/hugepages.sh@92 -- # local surp 00:05:03.345 00:18:56 -- setup/hugepages.sh@93 -- # local resv 00:05:03.345 00:18:56 -- setup/hugepages.sh@94 -- # local anon 00:05:03.345 00:18:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.345 00:18:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.345 00:18:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.345 00:18:56 -- setup/common.sh@18 -- # local node= 00:05:03.345 00:18:56 -- setup/common.sh@19 -- # 
local var val 00:05:03.345 00:18:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:03.345 00:18:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.345 00:18:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.345 00:18:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.345 00:18:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.345 00:18:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4962564 kB' 'MemAvailable: 9476888 kB' 'Buffers: 35428 kB' 'Cached: 4614860 kB' 'SwapCached: 0 kB' 'Active: 1011888 kB' 'Inactive: 3774804 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 147016 kB' 'Active(file): 1010840 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 404 kB' 'Writeback: 0 kB' 'AnonPages: 165576 kB' 'Mapped: 68080 kB' 'Shmem: 2596 kB' 'KReclaimable: 197000 kB' 'Slab: 263076 kB' 'SReclaimable: 197000 kB' 'SUnreclaim: 66076 kB' 'KernelStack: 4436 kB' 'PageTables: 3644 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 522808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 
00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- 
setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.345 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.346 00:18:56 -- setup/common.sh@33 -- # echo 0 00:05:03.346 00:18:56 -- setup/common.sh@33 -- # return 0 00:05:03.346 00:18:56 -- setup/hugepages.sh@97 -- # anon=0 00:05:03.346 00:18:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:03.346 00:18:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.346 00:18:56 -- setup/common.sh@18 -- # local node= 00:05:03.346 00:18:56 -- setup/common.sh@19 -- # local var val 00:05:03.346 00:18:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:03.346 00:18:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.346 00:18:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.346 00:18:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.346 00:18:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.346 00:18:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4962564 kB' 'MemAvailable: 9476888 kB' 'Buffers: 35428 kB' 'Cached: 4614860 kB' 'SwapCached: 0 kB' 'Active: 1011888 kB' 'Inactive: 3774600 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146812 kB' 'Active(file): 1010840 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 404 kB' 'Writeback: 0 kB' 'AnonPages: 165388 kB' 'Mapped: 68080 kB' 'Shmem: 2596 kB' 'KReclaimable: 197000 kB' 'Slab: 263076 kB' 'SReclaimable: 197000 kB' 'SUnreclaim: 66076 kB' 'KernelStack: 4448 kB' 'PageTables: 3844 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 522808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 
00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.346 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 
-- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.347 00:18:57 -- setup/common.sh@33 -- # echo 0 00:05:03.347 00:18:57 -- setup/common.sh@33 -- # return 0 00:05:03.347 00:18:57 -- setup/hugepages.sh@99 -- # surp=0 00:05:03.347 00:18:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.347 00:18:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.347 00:18:57 -- setup/common.sh@18 -- # local node= 00:05:03.347 00:18:57 -- setup/common.sh@19 -- # local var val 00:05:03.347 00:18:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:03.347 00:18:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.347 00:18:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.347 00:18:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.347 00:18:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.347 00:18:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4962564 kB' 'MemAvailable: 9476888 kB' 'Buffers: 35428 kB' 'Cached: 4614860 kB' 'SwapCached: 0 kB' 'Active: 
1011888 kB' 'Inactive: 3774860 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 147072 kB' 'Active(file): 1010840 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 404 kB' 'Writeback: 0 kB' 'AnonPages: 165648 kB' 'Mapped: 68080 kB' 'Shmem: 2596 kB' 'KReclaimable: 197000 kB' 'Slab: 263076 kB' 'SReclaimable: 197000 kB' 'SUnreclaim: 66076 kB' 'KernelStack: 4448 kB' 'PageTables: 3844 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 522808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.347 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.347 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 
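
The xtrace above and below is setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key until it reaches the requested field (here HugePages_Rsvd). As a minimal sketch of that parsing pattern only, in plain bash and not the verbatim SPDK helper:

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node-specific meminfo file instead.
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Strip the "Node <n> " prefix that per-node files carry, then scan
    # "Key: value" pairs until the requested key turns up.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

Called as get_meminfo HugePages_Rsvd it prints 0 on this host, which is the value the surrounding trace eventually returns via the "setup/common.sh@33 -- # echo 0" step.
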
00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 
00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.348 00:18:57 -- setup/common.sh@33 -- # echo 0 00:05:03.348 00:18:57 -- setup/common.sh@33 -- # return 0 00:05:03.348 nr_hugepages=1024 00:05:03.348 resv_hugepages=0 00:05:03.348 surplus_hugepages=0 00:05:03.348 anon_hugepages=0 00:05:03.348 00:18:57 -- setup/hugepages.sh@100 -- # resv=0 00:05:03.348 00:18:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:03.348 00:18:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.348 00:18:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.348 00:18:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.348 00:18:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.348 00:18:57 -- 
setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:03.348 00:18:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.348 00:18:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.348 00:18:57 -- setup/common.sh@18 -- # local node= 00:05:03.348 00:18:57 -- setup/common.sh@19 -- # local var val 00:05:03.348 00:18:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:03.348 00:18:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.348 00:18:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.348 00:18:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.348 00:18:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.348 00:18:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4962564 kB' 'MemAvailable: 9476888 kB' 'Buffers: 35428 kB' 'Cached: 4614860 kB' 'SwapCached: 0 kB' 'Active: 1011888 kB' 'Inactive: 3774600 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146812 kB' 'Active(file): 1010840 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 404 kB' 'Writeback: 0 kB' 'AnonPages: 165128 kB' 'Mapped: 68080 kB' 'Shmem: 2596 kB' 'KReclaimable: 197000 kB' 'Slab: 263076 kB' 'SReclaimable: 197000 kB' 'SUnreclaim: 66076 kB' 'KernelStack: 4448 kB' 'PageTables: 3844 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 522808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.348 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.348 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.349 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.349 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.351 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.351 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.352 00:18:57 -- setup/common.sh@33 -- # echo 1024 00:05:03.352 00:18:57 -- 
setup/common.sh@33 -- # return 0 00:05:03.352 00:18:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.352 00:18:57 -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.352 00:18:57 -- setup/hugepages.sh@27 -- # local node 00:05:03.352 00:18:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.352 00:18:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:03.352 00:18:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:03.352 00:18:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.352 00:18:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.352 00:18:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.352 00:18:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.352 00:18:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.352 00:18:57 -- setup/common.sh@18 -- # local node=0 00:05:03.352 00:18:57 -- setup/common.sh@19 -- # local var val 00:05:03.352 00:18:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:03.352 00:18:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.352 00:18:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.352 00:18:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.352 00:18:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.352 00:18:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4963352 kB' 'MemUsed: 7279628 kB' 'SwapCached: 0 kB' 'Active: 1011888 kB' 'Inactive: 3774340 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 146552 kB' 'Active(file): 1010840 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 404 kB' 'Writeback: 0 kB' 'FilePages: 4650288 kB' 'Mapped: 68080 kB' 'AnonPages: 165128 kB' 'Shmem: 2596 kB' 'KernelStack: 4448 kB' 'PageTables: 3844 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197000 kB' 'Slab: 263076 kB' 'SReclaimable: 197000 kB' 'SUnreclaim: 66076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 
00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.352 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.352 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # continue 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:03.353 00:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:03.353 00:18:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.353 00:18:57 -- setup/common.sh@33 -- # echo 0 00:05:03.353 00:18:57 -- setup/common.sh@33 -- # return 0 00:05:03.353 node0=1024 expecting 1024 00:05:03.353 ************************************ 00:05:03.353 END TEST even_2G_alloc 00:05:03.353 ************************************ 00:05:03.353 00:18:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.353 00:18:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.353 00:18:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.353 00:18:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.353 00:18:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:03.353 00:18:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:03.353 00:05:03.353 real 0m0.969s 00:05:03.353 user 0m0.275s 00:05:03.353 sys 0m0.733s 00:05:03.353 00:18:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:03.353 00:18:57 -- common/autotest_common.sh@10 -- # set +x 00:05:03.610 00:18:57 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:03.610 00:18:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.610 00:18:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.610 00:18:57 -- common/autotest_common.sh@10 -- # set +x 00:05:03.610 ************************************ 00:05:03.610 START TEST odd_alloc 00:05:03.610 ************************************ 00:05:03.610 00:18:57 -- common/autotest_common.sh@1111 -- # odd_alloc 00:05:03.610 00:18:57 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:03.610 00:18:57 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:03.610 00:18:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:03.610 00:18:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.610 00:18:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:03.610 00:18:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:03.610 00:18:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:03.610 00:18:57 -- setup/hugepages.sh@62 -- # 
local user_nodes 00:05:03.610 00:18:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:03.610 00:18:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:03.610 00:18:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.610 00:18:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.610 00:18:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:03.610 00:18:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:03.610 00:18:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.610 00:18:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:03.610 00:18:57 -- setup/hugepages.sh@83 -- # : 0 00:05:03.610 00:18:57 -- setup/hugepages.sh@84 -- # : 0 00:05:03.610 00:18:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:03.610 00:18:57 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:03.610 00:18:57 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:03.610 00:18:57 -- setup/hugepages.sh@160 -- # setup output 00:05:03.610 00:18:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.610 00:18:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:03.868 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:04.440 00:18:58 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:04.440 00:18:58 -- setup/hugepages.sh@89 -- # local node 00:05:04.440 00:18:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:04.440 00:18:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.440 00:18:58 -- setup/hugepages.sh@92 -- # local surp 00:05:04.440 00:18:58 -- setup/hugepages.sh@93 -- # local resv 00:05:04.440 00:18:58 -- setup/hugepages.sh@94 -- # local anon 00:05:04.440 00:18:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.440 00:18:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.440 00:18:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.440 00:18:58 -- setup/common.sh@18 -- # local node= 00:05:04.440 00:18:58 -- setup/common.sh@19 -- # local var val 00:05:04.440 00:18:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:04.440 00:18:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.440 00:18:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.440 00:18:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.440 00:18:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.440 00:18:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4962012 kB' 'MemAvailable: 9476340 kB' 'Buffers: 35428 kB' 'Cached: 4614860 kB' 'SwapCached: 0 kB' 'Active: 1011896 kB' 'Inactive: 3774720 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 146936 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 165892 kB' 'Mapped: 68096 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 263016 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 66012 kB' 'KernelStack: 4416 kB' 'PageTables: 3768 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 523600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 
'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
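
The odd_alloc trace that follows repeats the same pattern verify_nr_hugepages used for even_2G_alloc above: collect HugePages_Surp, HugePages_Rsvd, AnonHugePages and HugePages_Total via get_meminfo, then confirm that the kernel pool matches the requested count (1025 pages for this test) both globally and per NUMA node. As an illustrative condensation only, assuming the get_meminfo sketch shown earlier and not the verbatim setup/hugepages.sh, the check amounts to:

verify_nr_hugepages() {
    local nr_hugepages=$1               # 1024 for even_2G_alloc, 1025 here
    local surp resv anon total node

    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    anon=$(get_meminfo AnonHugePages)
    total=$(get_meminfo HugePages_Total)

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The global pool has to match the requested size exactly.
    (( total == nr_hugepages + surp + resv )) || return 1

    # Each node is then checked against its expected share
    # (a single node on this VM, so node0 carries everything).
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        echo "node$node=$(get_meminfo HugePages_Total "$node") expecting $nr_hugepages"
    done
}

On this single-node VM the loop visits only node0, which is why the even_2G_alloc test above printed "node0=1024 expecting 1024" just before its END TEST banner.
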
00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.440 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.440 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- 
# [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.441 00:18:58 -- setup/common.sh@33 -- # echo 0 00:05:04.441 00:18:58 -- setup/common.sh@33 -- # return 0 00:05:04.441 00:18:58 -- setup/hugepages.sh@97 -- # anon=0 00:05:04.441 00:18:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.441 00:18:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.441 00:18:58 -- setup/common.sh@18 -- # local node= 00:05:04.441 00:18:58 -- setup/common.sh@19 -- # local var val 00:05:04.441 00:18:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:04.441 00:18:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.441 00:18:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.441 00:18:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.441 00:18:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.441 00:18:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4962012 kB' 'MemAvailable: 9476340 kB' 'Buffers: 35428 kB' 'Cached: 4614860 kB' 'SwapCached: 0 kB' 'Active: 1011896 kB' 'Inactive: 3774616 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 146832 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627784 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 165548 kB' 'Mapped: 68096 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 263016 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 66012 kB' 'KernelStack: 4400 kB' 'PageTables: 3728 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 523600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.441 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.441 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # 
continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 
00:18:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.442 00:18:58 -- setup/common.sh@33 -- # echo 0 00:05:04.442 00:18:58 -- setup/common.sh@33 -- # return 0 00:05:04.442 00:18:58 -- setup/hugepages.sh@99 -- # surp=0 00:05:04.442 00:18:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.442 00:18:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.442 00:18:58 -- setup/common.sh@18 -- # local node= 00:05:04.442 00:18:58 -- setup/common.sh@19 -- # local var val 00:05:04.442 00:18:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:04.442 00:18:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.442 00:18:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.442 00:18:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.442 00:18:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.442 00:18:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4961760 kB' 'MemAvailable: 9476092 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 'Active: 1011888 kB' 'Inactive: 3774732 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 146944 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 165652 kB' 'Mapped: 68096 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 263032 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 66028 kB' 'KernelStack: 4396 kB' 'PageTables: 3876 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 523600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 
00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 
00:18:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.442 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.442 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 
00:05:04.442 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.443 00:18:58 -- setup/common.sh@33 -- # echo 0 00:05:04.443 00:18:58 -- setup/common.sh@33 -- # return 0 00:05:04.443 00:18:58 -- setup/hugepages.sh@100 -- # resv=0 00:05:04.443 nr_hugepages=1025 00:05:04.443 00:18:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:04.443 resv_hugepages=0 00:05:04.443 00:18:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:04.443 surplus_hugepages=0 00:05:04.443 00:18:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:04.443 anon_hugepages=0 00:05:04.443 00:18:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:04.443 00:18:58 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:04.443 00:18:58 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:04.443 00:18:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:04.443 00:18:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:04.443 00:18:58 -- setup/common.sh@18 -- # local node= 00:05:04.443 00:18:58 -- setup/common.sh@19 -- # local var val 00:05:04.443 00:18:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:04.443 00:18:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.443 00:18:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.443 00:18:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.443 00:18:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.443 00:18:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4962000 kB' 'MemAvailable: 9476332 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 'Active: 1011888 kB' 'Inactive: 3774368 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 146580 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 165196 
kB' 'Mapped: 68096 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 263032 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 66028 kB' 'KernelStack: 4388 kB' 'PageTables: 3524 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 523600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19612 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 
00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 
00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.443 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.443 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.444 00:18:58 -- setup/common.sh@33 -- # echo 1025 00:05:04.444 00:18:58 -- setup/common.sh@33 -- # return 0 00:05:04.444 00:18:58 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:04.444 00:18:58 -- setup/hugepages.sh@112 -- # get_nodes 00:05:04.444 00:18:58 -- setup/hugepages.sh@27 -- # local node 00:05:04.444 00:18:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.444 00:18:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:04.444 00:18:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:04.444 00:18:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:04.444 00:18:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.444 00:18:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.444 00:18:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:04.444 00:18:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.444 00:18:58 -- setup/common.sh@18 -- # local node=0 00:05:04.444 00:18:58 -- setup/common.sh@19 -- # local var val 00:05:04.444 00:18:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:04.444 00:18:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.444 00:18:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:04.444 00:18:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:04.444 00:18:58 -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:04.444 00:18:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4962000 kB' 'MemUsed: 7280980 kB' 'SwapCached: 0 kB' 'Active: 1011888 kB' 'Inactive: 3774368 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 146580 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'FilePages: 4650292 kB' 'Mapped: 68096 kB' 'AnonPages: 165196 kB' 'Shmem: 2596 kB' 'KernelStack: 4388 kB' 'PageTables: 3784 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197004 kB' 'Slab: 263032 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 66028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 
00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.444 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.444 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- 
# continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@32 -- # continue 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:04.445 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:04.445 
00:18:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.445 00:18:58 -- setup/common.sh@33 -- # echo 0 00:05:04.445 00:18:58 -- setup/common.sh@33 -- # return 0 00:05:04.445 00:18:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.445 00:18:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.703 00:18:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.703 00:18:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.703 node0=1025 expecting 1025 00:05:04.703 00:18:58 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:04.703 00:18:58 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:04.703 00:05:04.703 real 0m1.031s 00:05:04.703 user 0m0.362s 00:05:04.703 sys 0m0.714s 00:05:04.703 00:18:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.704 00:18:58 -- common/autotest_common.sh@10 -- # set +x 00:05:04.704 ************************************ 00:05:04.704 END TEST odd_alloc 00:05:04.704 ************************************ 00:05:04.704 00:18:58 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:04.704 00:18:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.704 00:18:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.704 00:18:58 -- common/autotest_common.sh@10 -- # set +x 00:05:04.704 ************************************ 00:05:04.704 START TEST custom_alloc 00:05:04.704 ************************************ 00:05:04.704 00:18:58 -- common/autotest_common.sh@1111 -- # custom_alloc 00:05:04.704 00:18:58 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:04.704 00:18:58 -- setup/hugepages.sh@169 -- # local node 00:05:04.704 00:18:58 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:04.704 00:18:58 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:04.704 00:18:58 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:04.704 00:18:58 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:04.704 00:18:58 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:04.704 00:18:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:04.704 00:18:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:04.704 00:18:58 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:04.704 00:18:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:04.704 00:18:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:04.704 00:18:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:04.704 00:18:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:04.704 00:18:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:04.704 00:18:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:04.704 00:18:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:04.704 00:18:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:04.704 00:18:58 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:04.704 00:18:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.704 00:18:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:04.704 00:18:58 -- setup/hugepages.sh@83 -- # : 0 00:05:04.704 00:18:58 -- setup/hugepages.sh@84 -- # : 0 00:05:04.704 00:18:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:04.704 00:18:58 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:04.704 00:18:58 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:04.704 00:18:58 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:04.704 00:18:58 -- 
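For readers following the xtrace above: the repeated "IFS=': '", "read -r var val _" and "[[ <key> == ... ]] / continue" entries are single passes of the setup/common.sh get_meminfo helper scanning /proc/meminfo (or the per-node meminfo file) until it reaches the requested key - AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total in turn - at which point it echoes the value and returns 0. A minimal sketch of that parse, written here for illustration only (the name get_meminfo_sketch and the sed-based prefix strip are this note's own simplifications, not the SPDK source):

#!/usr/bin/env bash
# Illustrative sketch (not the SPDK source): the same key lookup that the
# xtrace above walks through entry by entry inside setup/common.sh.
get_meminfo_sketch() {
    local get=$1 node=$2        # e.g. HugePages_Surp, optional NUMA node
    local mem_f=/proc/meminfo var val _
    # When a node is given, the per-node file is used instead, exactly as the
    # trace shows for "get_meminfo HugePages_Surp 0".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node meminfo lines carry a "Node N " prefix; strip it so the keys
    # match the system-wide format, then scan key by key until "get" matches.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Usage matching the checks in the odd_alloc trace above:
#   get_meminfo_sketch HugePages_Total     -> 1025
#   get_meminfo_sketch HugePages_Surp 0    -> 0

With those lookups the odd_alloc verification reduces to the check "(( 1025 == nr_hugepages + surp + resv ))" seen earlier in the trace, which is why the log prints "node0=1025 expecting 1025" just before END TEST odd_alloc.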
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:04.704 00:18:58 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:04.704 00:18:58 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:04.704 00:18:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:04.704 00:18:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:04.704 00:18:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:04.704 00:18:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:04.704 00:18:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:04.704 00:18:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:04.704 00:18:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:04.704 00:18:58 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:04.704 00:18:58 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:04.704 00:18:58 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:04.704 00:18:58 -- setup/hugepages.sh@78 -- # return 0 00:05:04.704 00:18:58 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:04.704 00:18:58 -- setup/hugepages.sh@187 -- # setup output 00:05:04.704 00:18:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.704 00:18:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:04.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:04.986 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:05.245 00:18:58 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:05.245 00:18:58 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:05.245 00:18:58 -- setup/hugepages.sh@89 -- # local node 00:05:05.245 00:18:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:05.245 00:18:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:05.245 00:18:58 -- setup/hugepages.sh@92 -- # local surp 00:05:05.245 00:18:58 -- setup/hugepages.sh@93 -- # local resv 00:05:05.245 00:18:58 -- setup/hugepages.sh@94 -- # local anon 00:05:05.245 00:18:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:05.245 00:18:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:05.245 00:18:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:05.245 00:18:58 -- setup/common.sh@18 -- # local node= 00:05:05.245 00:18:58 -- setup/common.sh@19 -- # local var val 00:05:05.245 00:18:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.245 00:18:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.245 00:18:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.245 00:18:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.245 00:18:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.245 00:18:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.245 00:18:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6017460 kB' 'MemAvailable: 10531792 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 'Active: 1011892 kB' 'Inactive: 3769888 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142096 kB' 'Active(file): 1010840 kB' 'Inactive(file): 3627792 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 4 kB' 'Writeback: 0 kB' 'AnonPages: 160688 kB' 'Mapped: 67340 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 262888 
kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65884 kB' 'KernelStack: 4296 kB' 'PageTables: 3372 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 510232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:05.245 00:18:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.245 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.245 00:18:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.245 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.245 00:18:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.245 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.245 00:18:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.245 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.245 00:18:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.245 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.245 00:18:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.245 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.245 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- 
setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # continue 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.246 00:18:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.246 00:18:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:05.246 00:18:58 -- setup/common.sh@33 -- # echo 0 00:05:05.246 00:18:58 -- setup/common.sh@33 -- # return 0 00:05:05.246 00:18:58 -- setup/hugepages.sh@97 -- # anon=0 00:05:05.246 00:18:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:05.246 00:18:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.246 00:18:58 -- setup/common.sh@18 -- # local node= 00:05:05.246 00:18:58 -- setup/common.sh@19 -- # local var val 00:05:05.246 00:18:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.246 00:18:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.246 00:18:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.246 00:18:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.246 00:18:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.246 00:18:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.246 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6017724 kB' 'MemAvailable: 10532056 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 'Active: 1011892 kB' 'Inactive: 3769948 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142156 kB' 'Active(file): 1010840 kB' 'Inactive(file): 3627792 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 4 kB' 'Writeback: 0 kB' 'AnonPages: 160764 kB' 'Mapped: 67340 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 262888 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65884 kB' 'KernelStack: 4328 kB' 'PageTables: 3452 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 512560 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
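
The AnonHugePages lookup above just returned 0 (anon=0), and the identical scan is now repeating for HugePages_Surp. What the trace is walking through is the generic meminfo lookup from setup/common.sh: read /proc/meminfo (or a per-node meminfo file when a node argument is given), strip any "Node N " prefix, then walk key/value pairs until the requested field matches. A minimal stand-alone sketch of that pattern, written here for illustration and not copied from the repository:

    shopt -s extglob                 # needed for the +([0-9]) pattern below
    get_meminfo() {                  # get_meminfo <field> [<node>]  -- sketch only
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local var val _ line
        # per-node lookups read that node's meminfo, whose lines carry a "Node N " prefix
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # strip the per-node prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue      # traced above as [[ <field> == \H\u\g\e... ]]
            echo "${val:-0}"
            return 0
        done
        echo 0                                    # field absent -> report 0
    }

Called as get_meminfo AnonHugePages it scans the system-wide /proc/meminfo; called as get_meminfo HugePages_Surp 0 it reads node0's counters, which is the per-node pass visible near the end of this excerpt.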
00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 
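
A side note on reading these entries: the field name on the right of each == appears with every character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) because inside [[ ]] the right-hand side of == is a glob pattern, and bash's xtrace re-quotes a quoted operand character by character to show it is matched literally rather than as a glob. A two-line reproduction, assuming nothing beyond stock bash:

    set -x
    key=HugePages_Surp
    if [[ MemTotal == "$key" ]]; then :; fi   # xtrace: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x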
00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 
-- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.247 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.247 00:18:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.248 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.248 00:18:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.248 00:18:59 -- setup/common.sh@33 -- # echo 0 00:05:05.248 00:18:59 -- setup/common.sh@33 -- # return 0 00:05:05.248 00:18:59 -- setup/hugepages.sh@99 -- # surp=0 00:05:05.248 00:18:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:05.248 00:18:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:05.248 00:18:59 -- setup/common.sh@18 -- # local node= 00:05:05.248 00:18:59 -- setup/common.sh@19 -- # local var val 00:05:05.248 00:18:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.509 00:18:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.509 00:18:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.509 00:18:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.509 00:18:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.509 00:18:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6017488 kB' 'MemAvailable: 10531820 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 'Active: 1011892 kB' 'Inactive: 3770168 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142376 kB' 'Active(file): 1010840 kB' 'Inactive(file): 3627792 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 4 kB' 'Writeback: 0 kB' 'AnonPages: 161024 kB' 'Mapped: 67380 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 262888 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65884 kB' 'KernelStack: 4408 kB' 'PageTables: 3668 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 512564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19484 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 
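
At this point the harness has read back AnonHugePages (anon=0) and HugePages_Surp (surp=0), and the lookup now under way is HugePages_Rsvd. These are standard kernel counters: HugePages_Surp counts surplus pages allocated beyond the configured pool, and HugePages_Rsvd counts pages reserved for mappings but not yet faulted in. Outside the harness the same fields can be inspected directly; the paths below are the ones already visible in this trace:

    # system-wide hugepage accounting
    grep -E 'AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
    # per-node accounting for node 0 (the same file the per-node pass reads later)
    grep HugePages /sys/devices/system/node/node0/meminfo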
00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.509 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.509 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 
-- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 
-- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:05.510 00:18:59 -- setup/common.sh@33 -- # echo 0 00:05:05.510 00:18:59 -- setup/common.sh@33 -- # return 0 00:05:05.510 00:18:59 -- setup/hugepages.sh@100 -- # resv=0 00:05:05.510 nr_hugepages=512 00:05:05.510 00:18:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:05.510 resv_hugepages=0 00:05:05.510 00:18:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:05.510 surplus_hugepages=0 00:05:05.510 00:18:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:05.510 anon_hugepages=0 00:05:05.510 00:18:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:05.510 00:18:59 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:05.510 00:18:59 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:05.510 00:18:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:05.510 00:18:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:05.510 00:18:59 -- setup/common.sh@18 -- # local node= 00:05:05.510 00:18:59 -- setup/common.sh@19 -- # local var val 00:05:05.510 00:18:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.510 00:18:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.510 00:18:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:05.510 00:18:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:05.510 00:18:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.510 00:18:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6017728 kB' 'MemAvailable: 10532060 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 'Active: 1011892 kB' 'Inactive: 3769820 kB' 'Active(anon): 1052 kB' 
'Inactive(anon): 142028 kB' 'Active(file): 1010840 kB' 'Inactive(file): 3627792 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 4 kB' 'Writeback: 0 kB' 'AnonPages: 160676 kB' 'Mapped: 67380 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 262888 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65884 kB' 'KernelStack: 4360 kB' 'PageTables: 3548 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 510232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.510 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.510 00:18:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 
-- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
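
The HugePages_Total lookup in progress here comes back as 512 a few entries later, matching the 512 pages the test configured, and the harness has already confirmed (( 512 == nr_hugepages + surp + resv )). The arithmetic of that verification, and the per-node surplus pass that follows it, condenses to a few lines; this is a sketch built on the illustrative get_meminfo above, not the repository's verify_nr_hugepages itself:

    nr_hugepages=512                                  # pool requested by the test
    total=$(get_meminfo HugePages_Total)              # 512 on this run
    surp=$(get_meminfo HugePages_Surp)                # 0
    resv=$(get_meminfo HugePages_Rsvd)                # 0
    (( total == nr_hugepages + surp + resv )) || echo "unexpected pool size: $total"
    # repeat the surplus check for every NUMA node (only node0 exists on this VM)
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        (( $(get_meminfo HugePages_Surp "$node") == 0 )) || echo "node$node reports surplus pages"
    done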
00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.511 00:18:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:05.511 00:18:59 -- setup/common.sh@33 -- # echo 512 00:05:05.511 00:18:59 -- setup/common.sh@33 -- # return 0 00:05:05.511 00:18:59 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:05.511 00:18:59 -- setup/hugepages.sh@112 -- # get_nodes 00:05:05.511 00:18:59 -- setup/hugepages.sh@27 -- # local node 00:05:05.511 00:18:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:05.511 00:18:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:05.511 00:18:59 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:05.511 00:18:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:05.511 00:18:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:05.511 00:18:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:05.511 00:18:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:05.511 00:18:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:05.511 00:18:59 -- setup/common.sh@18 -- # local node=0 00:05:05.511 00:18:59 -- setup/common.sh@19 -- # local var val 00:05:05.511 00:18:59 -- setup/common.sh@20 -- # local mem_f mem 00:05:05.511 00:18:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:05.511 00:18:59 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:05.511 00:18:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:05.511 00:18:59 -- setup/common.sh@28 -- # mapfile -t mem 00:05:05.511 00:18:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.511 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6017728 kB' 'MemUsed: 6225252 kB' 'SwapCached: 0 kB' 'Active: 1011896 kB' 'Inactive: 3769672 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141884 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 4 kB' 'Writeback: 0 kB' 'FilePages: 4650292 kB' 'Mapped: 67380 kB' 'AnonPages: 160552 kB' 'Shmem: 2596 kB' 'KernelStack: 4396 kB' 'PageTables: 3472 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197004 kB' 'Slab: 262888 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 
-- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 
-- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 
00:18:59 -- setup/common.sh@32 -- # continue 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:05:05.512 00:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:05:05.512 00:18:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:05.512 00:18:59 -- setup/common.sh@33 -- # echo 0 00:05:05.512 00:18:59 -- setup/common.sh@33 -- # return 0 00:05:05.512 00:18:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:05.512 00:18:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:05.512 00:18:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:05.512 00:18:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:05.512 00:18:59 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:05.512 node0=512 expecting 512 00:05:05.512 00:18:59 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:05.512 00:05:05.512 real 0m0.785s 00:05:05.512 user 0m0.326s 00:05:05.512 sys 0m0.502s 00:05:05.512 00:18:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.512 00:18:59 -- common/autotest_common.sh@10 -- # set +x 00:05:05.512 ************************************ 00:05:05.512 END TEST custom_alloc 00:05:05.512 ************************************ 00:05:05.512 00:18:59 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:05.512 00:18:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.512 00:18:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.512 00:18:59 -- common/autotest_common.sh@10 -- # set +x 00:05:05.512 ************************************ 00:05:05.512 START TEST no_shrink_alloc 00:05:05.512 ************************************ 00:05:05.512 00:18:59 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:05:05.513 00:18:59 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:05.513 00:18:59 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:05.513 00:18:59 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:05.513 00:18:59 -- setup/hugepages.sh@51 -- # shift 00:05:05.513 00:18:59 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:05.513 00:18:59 -- setup/hugepages.sh@52 -- # local node_ids 00:05:05.513 00:18:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:05.513 00:18:59 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:05.513 00:18:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:05.513 00:18:59 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:05.513 00:18:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:05.513 00:18:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:05.513 00:18:59 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:05.513 00:18:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:05.513 00:18:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:05.513 00:18:59 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:05.513 00:18:59 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:05.513 00:18:59 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:05.513 00:18:59 -- setup/hugepages.sh@73 -- # return 0 00:05:05.513 00:18:59 -- setup/hugepages.sh@198 -- # setup output 00:05:05.513 00:18:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.513 00:18:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.771 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:06.030 0000:00:10.0 (1b36 
0010): Already using the uio_pci_generic driver 00:05:06.289 00:19:00 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:06.289 00:19:00 -- setup/hugepages.sh@89 -- # local node 00:05:06.289 00:19:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:06.289 00:19:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:06.289 00:19:00 -- setup/hugepages.sh@92 -- # local surp 00:05:06.289 00:19:00 -- setup/hugepages.sh@93 -- # local resv 00:05:06.289 00:19:00 -- setup/hugepages.sh@94 -- # local anon 00:05:06.289 00:19:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:06.289 00:19:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:06.289 00:19:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:06.289 00:19:00 -- setup/common.sh@18 -- # local node= 00:05:06.289 00:19:00 -- setup/common.sh@19 -- # local var val 00:05:06.289 00:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.289 00:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.289 00:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.289 00:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.289 00:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.289 00:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4968524 kB' 'MemAvailable: 9482856 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 'Active: 1011896 kB' 'Inactive: 3770108 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142320 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 156 kB' 'Writeback: 0 kB' 'AnonPages: 160960 kB' 'Mapped: 67396 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 262808 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65804 kB' 'KernelStack: 4304 kB' 'PageTables: 3424 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ Buffers == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.289 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.289 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.550 00:19:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.550 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.550 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.550 00:19:00 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 
00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:06.551 00:19:00 -- setup/common.sh@33 -- # echo 0 00:05:06.551 00:19:00 -- setup/common.sh@33 -- # return 0 00:05:06.551 00:19:00 -- setup/hugepages.sh@97 -- # anon=0 00:05:06.551 00:19:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:06.551 00:19:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.551 00:19:00 -- setup/common.sh@18 -- # local node= 00:05:06.551 00:19:00 -- setup/common.sh@19 -- # local var val 00:05:06.551 00:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.551 00:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.551 00:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.551 00:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.551 00:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.551 00:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4968524 kB' 'MemAvailable: 9482856 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 
'Active: 1011896 kB' 'Inactive: 3770104 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142316 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 156 kB' 'Writeback: 0 kB' 'AnonPages: 160664 kB' 'Mapped: 67396 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 262808 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65804 kB' 'KernelStack: 4304 kB' 'PageTables: 3424 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 510232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19500 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.551 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.551 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # 
continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.552 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.552 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.552 00:19:00 -- setup/common.sh@33 -- # echo 0 00:05:06.552 00:19:00 -- setup/common.sh@33 -- # return 0 00:05:06.552 00:19:00 -- setup/hugepages.sh@99 -- # surp=0 00:05:06.552 00:19:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:06.552 00:19:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:06.552 00:19:00 -- setup/common.sh@18 
-- # local node= 00:05:06.552 00:19:00 -- setup/common.sh@19 -- # local var val 00:05:06.552 00:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.552 00:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.552 00:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.552 00:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.552 00:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.552 00:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.553 00:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4968524 kB' 'MemAvailable: 9482856 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 'Active: 1011896 kB' 'Inactive: 3770328 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142540 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 156 kB' 'Writeback: 0 kB' 'AnonPages: 160888 kB' 'Mapped: 67356 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 262808 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65804 kB' 'KernelStack: 4288 kB' 'PageTables: 3388 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 510232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 
00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- 
setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.553 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.553 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:06.554 00:19:00 -- setup/common.sh@33 -- # echo 0 00:05:06.554 00:19:00 -- setup/common.sh@33 -- # return 0 00:05:06.554 00:19:00 -- setup/hugepages.sh@100 -- # resv=0 00:05:06.554 nr_hugepages=1024 00:05:06.554 00:19:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:06.554 resv_hugepages=0 00:05:06.554 00:19:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:06.554 surplus_hugepages=0 00:05:06.554 00:19:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:06.554 anon_hugepages=0 00:05:06.554 00:19:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:06.554 00:19:00 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.554 00:19:00 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:06.554 00:19:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:06.554 00:19:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:06.554 00:19:00 -- setup/common.sh@18 -- # local node= 00:05:06.554 00:19:00 -- setup/common.sh@19 -- # local var val 00:05:06.554 00:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.554 00:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.554 00:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:06.554 00:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:06.554 00:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.554 00:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4968276 kB' 'MemAvailable: 9482608 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 'Active: 1011892 kB' 'Inactive: 3769992 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142204 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 156 kB' 'Writeback: 0 kB' 'AnonPages: 160840 kB' 'Mapped: 67344 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 262808 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65804 kB' 'KernelStack: 4384 kB' 'PageTables: 3612 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 510232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19516 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.554 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.554 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
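The trace above is the generic get_meminfo walk: every /proc/meminfo key is tested against the requested name (HugePages_Total here) and skipped with continue until it matches, at which point its value is echoed back. A minimal stand-alone sketch of that lookup pattern, assuming a hypothetical lookup_meminfo name rather than the SPDK helper itself:

#!/usr/bin/env bash
# Hypothetical re-implementation of the lookup loop traced above: split each
# /proc/meminfo line on ': ', keep scanning until the requested key matches,
# then print its value.
lookup_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # e.g. "HugePages_Total:    1024" -> var=HugePages_Total, val=1024
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

lookup_meminfo HugePages_Total   # prints 1024 on the node traced above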
00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 
-- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:06.555 00:19:00 -- setup/common.sh@33 -- # echo 1024 00:05:06.555 00:19:00 -- setup/common.sh@33 -- # return 0 00:05:06.555 00:19:00 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:06.555 00:19:00 -- setup/hugepages.sh@112 -- # get_nodes 00:05:06.555 00:19:00 -- setup/hugepages.sh@27 -- # local node 00:05:06.555 00:19:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:06.555 00:19:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:06.555 00:19:00 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:06.555 00:19:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:06.555 00:19:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:06.555 00:19:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:06.555 00:19:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:06.555 00:19:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:06.555 00:19:00 -- setup/common.sh@18 -- # local node=0 00:05:06.555 00:19:00 -- setup/common.sh@19 -- # local var val 00:05:06.555 00:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:06.555 00:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:06.555 00:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:06.555 00:19:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:06.555 00:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:06.555 00:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4968276 kB' 'MemUsed: 7274704 kB' 'SwapCached: 0 kB' 'Active: 1011892 kB' 'Inactive: 3769880 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 142092 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 156 kB' 'Writeback: 0 kB' 'FilePages: 4650292 kB' 'Mapped: 67344 kB' 'AnonPages: 160988 kB' 'Shmem: 2596 kB' 'KernelStack: 4368 kB' 'PageTables: 3572 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197004 kB' 'Slab: 262808 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 
00:19:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.555 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.555 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 
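The get_nodes step traced a little earlier (setup/hugepages.sh@27..@33) enumerates NUMA nodes by globbing /sys/devices/system/node/node<N> with extglob and records a per-node hugepage count; no_nodes comes out as 1 on this VM. A hedged sketch of that enumeration, reading the count back from sysfs instead of reusing the value the script already computed (array and variable names are illustrative):

#!/usr/bin/env bash
# Walk the per-node sysfs directories and collect the current 2048 kB
# hugepage count for each node.
shopt -s extglob nullglob

declare -A node_hugepages
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                                    # .../node0 -> 0
    node_hugepages[$id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

echo "no_nodes=${#node_hugepages[@]}"                    # 1 on this VM
for id in "${!node_hugepages[@]}"; do
    echo "node$id=${node_hugepages[$id]}"                # node0=1024
done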
00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # continue 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:06.556 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:06.556 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:06.556 00:19:00 -- setup/common.sh@33 -- # echo 0 00:05:06.556 00:19:00 -- setup/common.sh@33 -- # return 0 00:05:06.556 00:19:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:06.556 00:19:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:06.556 00:19:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:06.556 00:19:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:06.556 node0=1024 expecting 1024 00:05:06.556 00:19:00 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:06.556 00:19:00 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:06.556 00:19:00 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:06.556 00:19:00 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:06.556 00:19:00 -- setup/hugepages.sh@202 -- # setup output 00:05:06.556 00:19:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.556 00:19:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:06.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:06.815 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:07.077 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:07.077 
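When a node id is passed (get_meminfo HugePages_Surp 0 above), the same loop runs against /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the ${mem[@]#Node +([0-9]) } expansion strips before matching. An illustrative per-node variant of the lookup (the function name is hypothetical):

#!/usr/bin/env bash
# Read a key from a single node's meminfo, dropping the "Node <N> " prefix
# that the per-node sysfs file adds in front of every line.
shopt -s extglob

node_meminfo() {
    local node=$1 get=$2 line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }               # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

node_meminfo 0 HugePages_Surp                     # 0 in the run above

The INFO line just above comes from scripts/setup.sh: with CLEAR_HUGE=no and NRHUGE=512 it leaves the 1024 pages already allocated on node0 in place rather than shrinking the pool, which is why the verify pass that follows still sees HugePages_Total: 1024.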
00:19:00 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:07.077 00:19:00 -- setup/hugepages.sh@89 -- # local node 00:05:07.077 00:19:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:07.077 00:19:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:07.077 00:19:00 -- setup/hugepages.sh@92 -- # local surp 00:05:07.077 00:19:00 -- setup/hugepages.sh@93 -- # local resv 00:05:07.077 00:19:00 -- setup/hugepages.sh@94 -- # local anon 00:05:07.077 00:19:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:07.077 00:19:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:07.077 00:19:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:07.077 00:19:00 -- setup/common.sh@18 -- # local node= 00:05:07.077 00:19:00 -- setup/common.sh@19 -- # local var val 00:05:07.077 00:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.077 00:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.077 00:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.077 00:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.077 00:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.077 00:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4966544 kB' 'MemAvailable: 9480876 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 'Active: 1011908 kB' 'Inactive: 3770948 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 143160 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 156 kB' 'Writeback: 0 kB' 'AnonPages: 161912 kB' 'Mapped: 67432 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 262936 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65932 kB' 'KernelStack: 4464 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 510232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- 
setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ 
Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.077 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.077 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:07.078 00:19:00 -- setup/common.sh@33 -- # echo 0 00:05:07.078 00:19:00 -- setup/common.sh@33 -- # return 0 00:05:07.078 00:19:00 -- setup/hugepages.sh@97 -- # anon=0 00:05:07.078 00:19:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:07.078 00:19:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.078 00:19:00 -- setup/common.sh@18 -- # local node= 00:05:07.078 00:19:00 -- setup/common.sh@19 -- # local var val 00:05:07.078 00:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.078 00:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.078 00:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.078 00:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.078 00:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.078 00:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4966524 kB' 'MemAvailable: 9480856 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 
'Active: 1011908 kB' 'Inactive: 3770584 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 142796 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 156 kB' 'Writeback: 0 kB' 'AnonPages: 161552 kB' 'Mapped: 67472 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 263064 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 66060 kB' 'KernelStack: 4408 kB' 'PageTables: 3560 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 510232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.078 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.078 00:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 
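The pass above opens (setup/hugepages.sh@96..@97) by checking the transparent-hugepage setting: AnonHugePages is only collected when the bracketed selection is not [never]. A small sketch of that gate, using awk for brevity instead of the read loop shown earlier (variable names are illustrative):

#!/usr/bin/env bash
# Only count anonymous hugepages when THP is not disabled system-wide.
thp_state=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run

anon=0
if [[ $thp_state != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)   # value in kB
fi
echo "anon_hugepages=$anon"                                   # 0 in this run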
00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 
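Once anon, surp and resv are in hand, the bookkeeping already walked through in the earlier pass (setup/hugepages.sh@100..@110) is repeated: the counters are echoed and HugePages_Total has to equal the requested pool plus the surplus and reserved adjustments, all of which are zero in this run. A compressed restatement, with awk standing in for the read loop and all names illustrative:

#!/usr/bin/env bash
# Gather the hugepage counters and re-check the pool size the way the
# verify pass above does.
meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=1024                      # target configured by the test
resv=$(meminfo HugePages_Rsvd)
surp=$(meminfo HugePages_Surp)
anon=$(meminfo AnonHugePages)          # kB; only gathered when THP is not disabled
total=$(meminfo HugePages_Total)

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
(( total == nr_hugepages + surp + resv )) && echo "hugepage pool matches"   # mirrors the @110 check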
00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.079 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.079 00:19:00 -- setup/common.sh@33 -- # echo 0 00:05:07.079 00:19:00 -- setup/common.sh@33 -- # return 0 00:05:07.079 00:19:00 -- setup/hugepages.sh@99 -- # surp=0 00:05:07.079 00:19:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:07.079 00:19:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:07.079 00:19:00 -- setup/common.sh@18 -- # local node= 00:05:07.079 00:19:00 -- setup/common.sh@19 -- # 
local var val 00:05:07.079 00:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.079 00:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.079 00:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.079 00:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.079 00:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.079 00:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.079 00:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4966272 kB' 'MemAvailable: 9480604 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 'Active: 1011896 kB' 'Inactive: 3769828 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142040 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 156 kB' 'Writeback: 0 kB' 'AnonPages: 160884 kB' 'Mapped: 67424 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 262988 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65984 kB' 'KernelStack: 4288 kB' 'PageTables: 3376 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 510232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19532 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:07.079 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.080 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.080 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # 
[[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:07.081 00:19:00 -- setup/common.sh@33 -- # echo 0 00:05:07.081 00:19:00 -- setup/common.sh@33 -- # return 0 00:05:07.081 00:19:00 -- setup/hugepages.sh@100 -- # resv=0 00:05:07.081 nr_hugepages=1024 00:05:07.081 00:19:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:07.081 resv_hugepages=0 00:05:07.081 00:19:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:07.081 surplus_hugepages=0 00:05:07.081 00:19:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:07.081 anon_hugepages=0 00:05:07.081 00:19:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.081 00:19:00 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:07.081 00:19:00 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:07.081 00:19:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.081 00:19:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.081 00:19:00 -- setup/common.sh@18 -- # local node= 00:05:07.081 00:19:00 -- setup/common.sh@19 -- # local var val 00:05:07.081 00:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.081 00:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.081 00:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.081 00:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.081 00:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.081 00:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4965768 kB' 'MemAvailable: 9480100 kB' 'Buffers: 35428 kB' 'Cached: 4614864 kB' 'SwapCached: 0 kB' 'Active: 1011896 kB' 'Inactive: 3769828 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142040 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 156 kB' 'Writeback: 0 kB' 'AnonPages: 160884 kB' 'Mapped: 67424 kB' 'Shmem: 2596 kB' 'KReclaimable: 197004 kB' 'Slab: 262988 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65984 kB' 'KernelStack: 4356 kB' 'PageTables: 3376 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 510232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
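The wall of [[ ... == \H\u\g\e\P\a\g\e\s\_... ]] / continue entries around this point (and continuing below for HugePages_Total and HugePages_Surp) is get_meminfo scanning the meminfo dump printed above one field at a time with IFS=': ' read -r var val _ until the requested key matches, then echoing its value. A minimal standalone version of that pattern, assuming nothing about the rest of setup/common.sh (the function name here is illustrative):

get_meminfo_value() {
    local key=$1 node=${2-} file=/proc/meminfo line field value rest
    # With a node number, read the per-node figures from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        [[ -n $node ]] && line=${line#"Node $node "}   # per-node lines carry a "Node 0 " prefix
        IFS=': ' read -r field value rest <<<"$line"
        if [[ $field == "$key" ]]; then
            echo "${value:-0}"
            return 0
        fi
    done <"$file"
    return 1
}
# e.g. get_meminfo_value HugePages_Rsvd prints 0 on this box;
#      get_meminfo_value HugePages_Free 0 reads node0's meminfo.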
00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.081 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.081 00:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # 
continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 
00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.082 00:19:00 -- setup/common.sh@33 -- # echo 1024 00:05:07.082 00:19:00 -- setup/common.sh@33 -- # return 0 00:05:07.082 00:19:00 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:07.082 00:19:00 -- setup/hugepages.sh@112 -- # get_nodes 00:05:07.082 00:19:00 -- setup/hugepages.sh@27 -- # local node 00:05:07.082 00:19:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:07.082 00:19:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:07.082 00:19:00 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:07.082 00:19:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:07.082 00:19:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:07.082 00:19:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.082 00:19:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.082 00:19:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.082 00:19:00 -- setup/common.sh@18 -- # local node=0 00:05:07.082 00:19:00 -- setup/common.sh@19 -- # local var val 00:05:07.082 00:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:05:07.082 00:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.082 00:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.082 00:19:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.082 00:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.082 00:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 4965768 kB' 'MemUsed: 7277212 kB' 'SwapCached: 0 kB' 'Active: 1011896 kB' 'Inactive: 3769804 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142016 kB' 'Active(file): 1010844 kB' 'Inactive(file): 3627788 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 156 kB' 'Writeback: 0 kB' 'FilePages: 4650292 kB' 'Mapped: 67424 kB' 'AnonPages: 160600 kB' 'Shmem: 2596 kB' 'KernelStack: 4340 kB' 'PageTables: 3596 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197004 kB' 'Slab: 262988 kB' 'SReclaimable: 197004 kB' 'SUnreclaim: 65984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ MemFree 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.082 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.082 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # 
continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # continue 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:05:07.083 00:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:05:07.083 00:19:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.083 00:19:00 -- setup/common.sh@33 -- # echo 0 00:05:07.083 00:19:00 -- setup/common.sh@33 -- # return 0 00:05:07.083 00:19:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.083 00:19:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.083 00:19:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.083 00:19:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.083 00:19:00 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:07.083 node0=1024 expecting 1024 00:05:07.083 00:19:00 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:07.083 00:05:07.083 real 0m1.568s 00:05:07.083 user 0m0.610s 00:05:07.083 sys 0m1.044s 00:05:07.083 00:19:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.083 00:19:00 -- common/autotest_common.sh@10 -- # set +x 00:05:07.083 ************************************ 00:05:07.083 END TEST no_shrink_alloc 00:05:07.083 ************************************ 00:05:07.083 00:19:00 -- setup/hugepages.sh@217 -- # clear_hp 00:05:07.083 00:19:00 -- setup/hugepages.sh@37 -- # local node hp 00:05:07.083 00:19:00 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:07.083 00:19:00 -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.083 00:19:00 -- setup/hugepages.sh@41 -- # echo 0 00:05:07.083 00:19:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.083 00:19:00 -- setup/hugepages.sh@41 -- # echo 0 00:05:07.083 00:19:00 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:07.083 00:19:00 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:07.083 ************************************ 00:05:07.083 END TEST hugepages 00:05:07.083 ************************************ 00:05:07.083 00:05:07.083 real 0m7.079s 00:05:07.083 user 0m2.636s 00:05:07.083 sys 0m4.706s 00:05:07.083 00:19:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.083 00:19:00 -- common/autotest_common.sh@10 -- # set +x 00:05:07.342 00:19:00 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:07.342 00:19:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.342 00:19:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.342 00:19:00 -- common/autotest_common.sh@10 -- # set +x 00:05:07.342 ************************************ 00:05:07.342 START TEST driver 00:05:07.342 ************************************ 00:05:07.342 00:19:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:07.342 * Looking for test storage... 00:05:07.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:07.342 00:19:01 -- setup/driver.sh@68 -- # setup reset 00:05:07.342 00:19:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.342 00:19:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.910 00:19:01 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:07.910 00:19:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.910 00:19:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.910 00:19:01 -- common/autotest_common.sh@10 -- # set +x 00:05:07.910 ************************************ 00:05:07.910 START TEST guess_driver 00:05:07.910 ************************************ 00:05:07.910 00:19:01 -- common/autotest_common.sh@1111 -- # guess_driver 00:05:07.910 00:19:01 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:07.910 00:19:01 -- setup/driver.sh@47 -- # local fail=0 00:05:07.910 00:19:01 -- setup/driver.sh@49 -- # pick_driver 00:05:07.910 00:19:01 -- setup/driver.sh@36 -- # vfio 00:05:07.910 00:19:01 -- setup/driver.sh@21 -- # local iommu_grups 00:05:07.910 00:19:01 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:07.910 00:19:01 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:07.910 00:19:01 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:07.910 00:19:01 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:07.910 00:19:01 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:07.910 00:19:01 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:07.910 00:19:01 -- setup/driver.sh@32 -- # return 1 00:05:07.910 00:19:01 -- setup/driver.sh@38 -- # uio 00:05:07.910 00:19:01 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:07.910 00:19:01 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:07.910 00:19:01 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:07.910 00:19:01 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:07.910 00:19:01 -- setup/driver.sh@12 -- # [[ insmod 
/lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:05:07.910 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:07.910 00:19:01 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:07.910 00:19:01 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:07.910 00:19:01 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:07.910 Looking for driver=uio_pci_generic 00:05:07.910 00:19:01 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:07.910 00:19:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:07.911 00:19:01 -- setup/driver.sh@45 -- # setup output config 00:05:07.911 00:19:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.911 00:19:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.477 00:19:02 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:08.477 00:19:02 -- setup/driver.sh@58 -- # continue 00:05:08.477 00:19:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.477 00:19:02 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.477 00:19:02 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:08.477 00:19:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.868 00:19:03 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:09.868 00:19:03 -- setup/driver.sh@65 -- # setup reset 00:05:09.868 00:19:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.868 00:19:03 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.159 00:05:10.159 real 0m2.154s 00:05:10.159 user 0m0.554s 00:05:10.159 sys 0m1.613s 00:05:10.159 00:19:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.159 00:19:03 -- common/autotest_common.sh@10 -- # set +x 00:05:10.159 ************************************ 00:05:10.159 END TEST guess_driver 00:05:10.159 ************************************ 00:05:10.159 00:05:10.159 real 0m2.921s 00:05:10.159 user 0m0.899s 00:05:10.159 sys 0m2.037s 00:05:10.159 00:19:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.159 00:19:03 -- common/autotest_common.sh@10 -- # set +x 00:05:10.159 ************************************ 00:05:10.159 END TEST driver 00:05:10.159 ************************************ 00:05:10.159 00:19:03 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:10.159 00:19:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.159 00:19:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.159 00:19:03 -- common/autotest_common.sh@10 -- # set +x 00:05:10.159 ************************************ 00:05:10.159 START TEST devices 00:05:10.159 ************************************ 00:05:10.159 00:19:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:10.417 * Looking for test storage... 
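The clear_hp step traced just before the driver tests resets every per-node hugepage pool through sysfs and flags the fact for later stages via CLEAR_HUGE. Stripped of the xtrace noise it amounts to roughly this loop (the nr_hugepages redirect target is inferred from the sysfs layout, not spelled out in the trace):

for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"    # hand the pool's pages back to the kernel
done
export CLEAR_HUGE=yes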
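guess_driver's decision above boils down to: prefer VFIO when the IOMMU groups directory is populated (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic once modprobe --show-depends confirms the module and its uio.ko dependency resolve, which is the branch this run took. A rough standalone rendering of that logic (the function name and the vfio-pci spelling are assumptions for the example, not driver.sh internals):

pick_userspace_driver() {
    local groups=(/sys/kernel/iommu_groups/*) unsafe=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if [[ -e ${groups[0]} || $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic &>/dev/null; then
        echo uio_pci_generic           # chosen in this run: no IOMMU groups, unsafe_vfio=N
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}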
00:05:10.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:10.417 00:19:04 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:10.417 00:19:04 -- setup/devices.sh@192 -- # setup reset 00:05:10.417 00:19:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.417 00:19:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.985 00:19:04 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:10.985 00:19:04 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:10.985 00:19:04 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:10.985 00:19:04 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:10.985 00:19:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:10.985 00:19:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:10.985 00:19:04 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:10.985 00:19:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.985 00:19:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:10.985 00:19:04 -- setup/devices.sh@196 -- # blocks=() 00:05:10.985 00:19:04 -- setup/devices.sh@196 -- # declare -a blocks 00:05:10.985 00:19:04 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:10.985 00:19:04 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:10.985 00:19:04 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:10.985 00:19:04 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:10.985 00:19:04 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:10.985 00:19:04 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:10.985 00:19:04 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:10.985 00:19:04 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:10.985 00:19:04 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:10.985 00:19:04 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:10.985 00:19:04 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:10.985 No valid GPT data, bailing 00:05:10.985 00:19:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:10.985 00:19:04 -- scripts/common.sh@391 -- # pt= 00:05:10.985 00:19:04 -- scripts/common.sh@392 -- # return 1 00:05:10.985 00:19:04 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:10.985 00:19:04 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:10.985 00:19:04 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:10.985 00:19:04 -- setup/common.sh@80 -- # echo 5368709120 00:05:10.985 00:19:04 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:10.985 00:19:04 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:10.985 00:19:04 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:10.985 00:19:04 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:10.985 00:19:04 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:10.985 00:19:04 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:10.985 00:19:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.985 00:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.985 00:19:04 -- common/autotest_common.sh@10 -- # set +x 00:05:10.985 ************************************ 00:05:10.985 START TEST nvme_mount 00:05:10.985 ************************************ 00:05:10.985 00:19:04 -- common/autotest_common.sh@1111 -- # nvme_mount 00:05:10.985 00:19:04 
-- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:10.985 00:19:04 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:10.985 00:19:04 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:10.985 00:19:04 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:10.985 00:19:04 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:10.985 00:19:04 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:10.985 00:19:04 -- setup/common.sh@40 -- # local part_no=1 00:05:10.985 00:19:04 -- setup/common.sh@41 -- # local size=1073741824 00:05:10.985 00:19:04 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:10.985 00:19:04 -- setup/common.sh@44 -- # parts=() 00:05:10.985 00:19:04 -- setup/common.sh@44 -- # local parts 00:05:10.985 00:19:04 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:10.985 00:19:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.985 00:19:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:10.985 00:19:04 -- setup/common.sh@46 -- # (( part++ )) 00:05:10.985 00:19:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.985 00:19:04 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:10.985 00:19:04 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:10.985 00:19:04 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:12.382 Creating new GPT entries in memory. 00:05:12.382 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:12.382 other utilities. 00:05:12.382 00:19:05 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:12.382 00:19:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.382 00:19:05 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:12.382 00:19:05 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.382 00:19:05 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:13.316 Creating new GPT entries in memory. 00:05:13.316 The operation has completed successfully. 
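Before this partition was created, devices.sh settled on nvme0n1 by skipping zoned namespaces, confirming the disk carries no partition table, and checking it clears the 3221225472-byte (3 GiB) minimum; nvme_mount then zapped the disk, created partition 1 over LBAs 2048-264191 as shown above, and goes on to format and mount it (mkfs.ext4 -qF plus a plain mount in the trace that follows). A compressed sketch of that flow, assuming only what the log shows plus ordinary block-device tooling:

disk=nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
min_disk_size=3221225472                                            # 3 GiB, as in devices.sh

[[ $(cat /sys/block/$disk/queue/zoned) == none ]]         || exit 1 # skip zoned namespaces
[[ -z $(blkid -s PTTYPE -o value /dev/$disk) ]]           || exit 1 # no existing partition table
(( $(cat /sys/block/$disk/size) * 512 >= min_disk_size )) || exit 1 # 5368709120 bytes here

sgdisk /dev/$disk --zap-all                                 # wipe old GPT/MBR structures
flock /dev/$disk sgdisk /dev/$disk --new=1:2048:264191      # same LBA range as the trace
mkdir -p "$mnt"
mkfs.ext4 -qF /dev/${disk}p1
mount /dev/${disk}p1 "$mnt"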
00:05:13.316 00:19:06 -- setup/common.sh@57 -- # (( part++ )) 00:05:13.316 00:19:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.316 00:19:06 -- setup/common.sh@62 -- # wait 103884 00:05:13.316 00:19:06 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.316 00:19:06 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:13.316 00:19:06 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.316 00:19:06 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:13.316 00:19:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:13.316 00:19:06 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.316 00:19:06 -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.316 00:19:06 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:13.316 00:19:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:13.316 00:19:06 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.316 00:19:06 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:13.316 00:19:06 -- setup/devices.sh@53 -- # local found=0 00:05:13.316 00:19:06 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.316 00:19:06 -- setup/devices.sh@56 -- # : 00:05:13.316 00:19:06 -- setup/devices.sh@59 -- # local pci status 00:05:13.316 00:19:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.316 00:19:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:13.316 00:19:06 -- setup/devices.sh@47 -- # setup output config 00:05:13.316 00:19:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.316 00:19:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:13.316 00:19:07 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:13.316 00:19:07 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:13.316 00:19:07 -- setup/devices.sh@63 -- # found=1 00:05:13.316 00:19:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.316 00:19:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:13.316 00:19:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.575 00:19:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:13.575 00:19:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.539 00:19:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.539 00:19:08 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:14.539 00:19:08 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.539 00:19:08 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.539 00:19:08 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.539 00:19:08 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:14.539 00:19:08 -- setup/devices.sh@20 -- # mountpoint -q 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.539 00:19:08 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.539 00:19:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.539 00:19:08 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:14.539 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:14.539 00:19:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:14.539 00:19:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:14.539 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:14.539 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:14.539 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:14.539 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:14.539 00:19:08 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:14.539 00:19:08 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:14.539 00:19:08 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.539 00:19:08 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:14.539 00:19:08 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:14.539 00:19:08 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.539 00:19:08 -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.539 00:19:08 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:14.539 00:19:08 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:14.539 00:19:08 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:14.539 00:19:08 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:14.539 00:19:08 -- setup/devices.sh@53 -- # local found=0 00:05:14.539 00:19:08 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.539 00:19:08 -- setup/devices.sh@56 -- # : 00:05:14.539 00:19:08 -- setup/devices.sh@59 -- # local pci status 00:05:14.539 00:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.539 00:19:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:14.539 00:19:08 -- setup/devices.sh@47 -- # setup output config 00:05:14.539 00:19:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.539 00:19:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.797 00:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:14.797 00:19:08 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:14.797 00:19:08 -- setup/devices.sh@63 -- # found=1 00:05:14.797 00:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.797 00:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:14.797 00:19:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.057 00:19:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:15.057 00:19:08 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.991 00:19:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.991 00:19:09 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:15.991 00:19:09 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.991 00:19:09 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.991 00:19:09 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:15.991 00:19:09 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:15.991 00:19:09 -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:05:15.991 00:19:09 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:15.991 00:19:09 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:15.991 00:19:09 -- setup/devices.sh@50 -- # local mount_point= 00:05:15.991 00:19:09 -- setup/devices.sh@51 -- # local test_file= 00:05:15.991 00:19:09 -- setup/devices.sh@53 -- # local found=0 00:05:15.991 00:19:09 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:15.991 00:19:09 -- setup/devices.sh@59 -- # local pci status 00:05:15.991 00:19:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.991 00:19:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:15.991 00:19:09 -- setup/devices.sh@47 -- # setup output config 00:05:15.991 00:19:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.991 00:19:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.249 00:19:09 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:16.249 00:19:09 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:16.249 00:19:09 -- setup/devices.sh@63 -- # found=1 00:05:16.249 00:19:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.249 00:19:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:16.249 00:19:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.507 00:19:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:16.507 00:19:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.452 00:19:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.452 00:19:11 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.452 00:19:11 -- setup/devices.sh@68 -- # return 0 00:05:17.452 00:19:11 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:17.452 00:19:11 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:17.452 00:19:11 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.452 00:19:11 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.452 00:19:11 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.452 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.452 00:05:17.452 real 0m6.283s 00:05:17.452 user 0m0.742s 00:05:17.452 sys 0m3.615s 00:05:17.452 00:19:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.452 ************************************ 00:05:17.452 END TEST nvme_mount 00:05:17.452 ************************************ 00:05:17.452 00:19:11 -- common/autotest_common.sh@10 -- # set +x 00:05:17.452 00:19:11 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:17.452 00:19:11 -- common/autotest_common.sh@1087 
-- # '[' 2 -le 1 ']' 00:05:17.452 00:19:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.452 00:19:11 -- common/autotest_common.sh@10 -- # set +x 00:05:17.452 ************************************ 00:05:17.452 START TEST dm_mount 00:05:17.452 ************************************ 00:05:17.452 00:19:11 -- common/autotest_common.sh@1111 -- # dm_mount 00:05:17.452 00:19:11 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:17.452 00:19:11 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:17.452 00:19:11 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:17.452 00:19:11 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:17.452 00:19:11 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:17.452 00:19:11 -- setup/common.sh@40 -- # local part_no=2 00:05:17.452 00:19:11 -- setup/common.sh@41 -- # local size=1073741824 00:05:17.452 00:19:11 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:17.452 00:19:11 -- setup/common.sh@44 -- # parts=() 00:05:17.452 00:19:11 -- setup/common.sh@44 -- # local parts 00:05:17.452 00:19:11 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:17.452 00:19:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.452 00:19:11 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.452 00:19:11 -- setup/common.sh@46 -- # (( part++ )) 00:05:17.452 00:19:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.452 00:19:11 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.452 00:19:11 -- setup/common.sh@46 -- # (( part++ )) 00:05:17.452 00:19:11 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.452 00:19:11 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:17.452 00:19:11 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:17.452 00:19:11 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:18.388 Creating new GPT entries in memory. 00:05:18.388 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:18.388 other utilities. 00:05:18.388 00:19:12 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:18.388 00:19:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.388 00:19:12 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.388 00:19:12 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.388 00:19:12 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:19.765 Creating new GPT entries in memory. 00:05:19.765 The operation has completed successfully. 00:05:19.765 00:19:13 -- setup/common.sh@57 -- # (( part++ )) 00:05:19.765 00:19:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.765 00:19:13 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.765 00:19:13 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.765 00:19:13 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:20.766 The operation has completed successfully. 
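The dm_mount test starting here cuts two partitions and then stitches them into a single /dev/mapper/nvme_dm_test node; the dmsetup create and the readlink that resolves it to /dev/dm-0 appear just below. The table devices.sh feeds to dmsetup is not echoed in the trace; a linear concatenation of the two partitions would look roughly like this (the blockdev sizing and the table layout are assumptions for illustration):

p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")        # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
# Teardown mirrors the end of the trace: umount, then dmsetup remove --force nvme_dm_test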
00:05:20.766 00:19:14 -- setup/common.sh@57 -- # (( part++ )) 00:05:20.766 00:19:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.766 00:19:14 -- setup/common.sh@62 -- # wait 104366 00:05:20.766 00:19:14 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:20.766 00:19:14 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.766 00:19:14 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.766 00:19:14 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:20.766 00:19:14 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:20.766 00:19:14 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.766 00:19:14 -- setup/devices.sh@161 -- # break 00:05:20.766 00:19:14 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.766 00:19:14 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:20.766 00:19:14 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:20.766 00:19:14 -- setup/devices.sh@166 -- # dm=dm-0 00:05:20.766 00:19:14 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:20.766 00:19:14 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:20.766 00:19:14 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.766 00:19:14 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:20.766 00:19:14 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.766 00:19:14 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.766 00:19:14 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:20.766 00:19:14 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.766 00:19:14 -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.766 00:19:14 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:20.766 00:19:14 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:20.766 00:19:14 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.766 00:19:14 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:20.766 00:19:14 -- setup/devices.sh@53 -- # local found=0 00:05:20.766 00:19:14 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.766 00:19:14 -- setup/devices.sh@56 -- # : 00:05:20.766 00:19:14 -- setup/devices.sh@59 -- # local pci status 00:05:20.766 00:19:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:20.766 00:19:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.766 00:19:14 -- setup/devices.sh@47 -- # setup output config 00:05:20.766 00:19:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.766 00:19:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.024 00:19:14 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:21.024 00:19:14 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:21.024 00:19:14 -- setup/devices.sh@63 -- # found=1 00:05:21.024 00:19:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.024 00:19:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:21.024 00:19:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.024 00:19:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:21.024 00:19:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.958 00:19:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.958 00:19:15 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:21.958 00:19:15 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:21.958 00:19:15 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.958 00:19:15 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:21.958 00:19:15 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:22.216 00:19:15 -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:22.216 00:19:15 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:22.216 00:19:15 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:22.216 00:19:15 -- setup/devices.sh@50 -- # local mount_point= 00:05:22.216 00:19:15 -- setup/devices.sh@51 -- # local test_file= 00:05:22.216 00:19:15 -- setup/devices.sh@53 -- # local found=0 00:05:22.216 00:19:15 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:22.216 00:19:15 -- setup/devices.sh@59 -- # local pci status 00:05:22.216 00:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.216 00:19:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:22.216 00:19:15 -- setup/devices.sh@47 -- # setup output config 00:05:22.216 00:19:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.216 00:19:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.216 00:19:15 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:22.216 00:19:15 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:22.216 00:19:15 -- setup/devices.sh@63 -- # found=1 00:05:22.216 00:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.216 00:19:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:22.216 00:19:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.475 00:19:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:22.475 00:19:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.410 00:19:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.410 00:19:17 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:23.410 00:19:17 -- setup/devices.sh@68 -- # return 0 00:05:23.410 00:19:17 -- setup/devices.sh@187 -- # cleanup_dm 00:05:23.410 00:19:17 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:23.410 00:19:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.410 00:19:17 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:23.410 00:19:17 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.410 00:19:17 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:23.410 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.410 00:19:17 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.410 00:19:17 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:23.410 00:05:23.410 real 0m6.013s 00:05:23.410 user 0m0.507s 00:05:23.410 sys 0m2.363s 00:05:23.410 00:19:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.410 00:19:17 -- common/autotest_common.sh@10 -- # set +x 00:05:23.410 ************************************ 00:05:23.410 END TEST dm_mount 00:05:23.410 ************************************ 00:05:23.410 00:19:17 -- setup/devices.sh@1 -- # cleanup 00:05:23.410 00:19:17 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:23.410 00:19:17 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.410 00:19:17 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.410 00:19:17 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:23.671 00:19:17 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.671 00:19:17 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.671 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.671 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.671 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.671 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.671 00:19:17 -- setup/devices.sh@12 -- # cleanup_dm 00:05:23.671 00:19:17 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:23.671 00:19:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.671 00:19:17 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.671 00:19:17 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.671 00:19:17 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.671 00:19:17 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:23.671 00:05:23.671 real 0m13.324s 00:05:23.671 user 0m1.691s 00:05:23.671 sys 0m6.565s 00:05:23.671 00:19:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.671 00:19:17 -- common/autotest_common.sh@10 -- # set +x 00:05:23.671 ************************************ 00:05:23.671 END TEST devices 00:05:23.671 ************************************ 00:05:23.671 ************************************ 00:05:23.671 END TEST setup.sh 00:05:23.671 ************************************ 00:05:23.671 00:05:23.671 real 0m29.426s 00:05:23.671 user 0m7.119s 00:05:23.671 sys 0m17.689s 00:05:23.671 00:19:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.671 00:19:17 -- common/autotest_common.sh@10 -- # set +x 00:05:23.671 00:19:17 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:24.241 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:24.242 Hugepages 00:05:24.242 node hugesize free / total 00:05:24.242 node0 1048576kB 0 / 0 00:05:24.242 node0 2048kB 2048 / 2048 00:05:24.242 00:05:24.242 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:24.242 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:24.500 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:24.500 00:19:18 -- spdk/autotest.sh@130 -- # uname -s 00:05:24.500 
00:19:18 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:24.500 00:19:18 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:24.500 00:19:18 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:25.015 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.947 00:19:19 -- common/autotest_common.sh@1518 -- # sleep 1 00:05:26.880 00:19:20 -- common/autotest_common.sh@1519 -- # bdfs=() 00:05:26.880 00:19:20 -- common/autotest_common.sh@1519 -- # local bdfs 00:05:26.880 00:19:20 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:26.880 00:19:20 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:26.880 00:19:20 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:26.880 00:19:20 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:26.880 00:19:20 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.880 00:19:20 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:26.880 00:19:20 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:26.880 00:19:20 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:05:26.880 00:19:20 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:05:26.880 00:19:20 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:27.447 Waiting for block devices as requested 00:05:27.447 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:27.447 00:19:21 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:27.447 00:19:21 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:27.447 00:19:21 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:05:27.447 00:19:21 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:05:27.447 00:19:21 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:27.447 00:19:21 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:05:27.447 00:19:21 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:27.447 00:19:21 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:05:27.447 00:19:21 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:27.447 00:19:21 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:27.447 00:19:21 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:27.447 00:19:21 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:27.447 00:19:21 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:27.447 00:19:21 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:27.447 00:19:21 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:27.447 00:19:21 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:27.712 00:19:21 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:27.712 00:19:21 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:27.712 00:19:21 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:27.712 00:19:21 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:27.712 00:19:21 -- common/autotest_common.sh@1541 
-- # [[ 0 -eq 0 ]] 00:05:27.712 00:19:21 -- common/autotest_common.sh@1543 -- # continue 00:05:27.712 00:19:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:27.712 00:19:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:27.712 00:19:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.712 00:19:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:27.712 00:19:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:27.712 00:19:21 -- common/autotest_common.sh@10 -- # set +x 00:05:27.712 00:19:21 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:28.241 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:29.179 00:19:22 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:29.179 00:19:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:29.179 00:19:22 -- common/autotest_common.sh@10 -- # set +x 00:05:29.179 00:19:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:29.179 00:19:22 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:05:29.179 00:19:22 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:05:29.179 00:19:22 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:29.179 00:19:22 -- common/autotest_common.sh@1563 -- # local bdfs 00:05:29.179 00:19:22 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:05:29.179 00:19:22 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:29.179 00:19:22 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:29.179 00:19:22 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.179 00:19:22 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:29.179 00:19:22 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:29.179 00:19:22 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:05:29.179 00:19:22 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:05:29.179 00:19:22 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:29.179 00:19:22 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:29.179 00:19:22 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:29.179 00:19:22 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:29.179 00:19:22 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:05:29.179 00:19:22 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:05:29.179 00:19:22 -- common/autotest_common.sh@1579 -- # return 0 00:05:29.179 00:19:22 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:05:29.179 00:19:22 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:29.179 00:19:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.179 00:19:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.179 00:19:22 -- common/autotest_common.sh@10 -- # set +x 00:05:29.438 ************************************ 00:05:29.438 START TEST unittest 00:05:29.438 ************************************ 00:05:29.438 00:19:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:29.438 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:29.438 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:29.438 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:29.438 +++ 
dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:29.438 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:29.438 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:29.438 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:29.438 ++ rpc_py=rpc_cmd 00:05:29.438 ++ set -e 00:05:29.438 ++ shopt -s nullglob 00:05:29.438 ++ shopt -s extglob 00:05:29.438 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:29.438 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:29.438 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:29.438 +++ CONFIG_WPDK_DIR= 00:05:29.438 +++ CONFIG_ASAN=y 00:05:29.438 +++ CONFIG_VBDEV_COMPRESS=n 00:05:29.438 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:29.438 +++ CONFIG_USDT=n 00:05:29.438 +++ CONFIG_CUSTOMOCF=n 00:05:29.438 +++ CONFIG_PREFIX=/usr/local 00:05:29.438 +++ CONFIG_RBD=n 00:05:29.438 +++ CONFIG_LIBDIR= 00:05:29.438 +++ CONFIG_IDXD=y 00:05:29.438 +++ CONFIG_NVME_CUSE=y 00:05:29.438 +++ CONFIG_SMA=n 00:05:29.438 +++ CONFIG_VTUNE=n 00:05:29.438 +++ CONFIG_TSAN=n 00:05:29.438 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:29.438 +++ CONFIG_VFIO_USER_DIR= 00:05:29.438 +++ CONFIG_PGO_CAPTURE=n 00:05:29.438 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:29.439 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:29.439 +++ CONFIG_LTO=n 00:05:29.439 +++ CONFIG_ISCSI_INITIATOR=y 00:05:29.439 +++ CONFIG_CET=n 00:05:29.439 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:29.439 +++ CONFIG_OCF_PATH= 00:05:29.439 +++ CONFIG_RDMA_SET_TOS=y 00:05:29.439 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:29.439 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:29.439 +++ CONFIG_UBLK=n 00:05:29.439 +++ CONFIG_ISAL_CRYPTO=y 00:05:29.439 +++ CONFIG_OPENSSL_PATH= 00:05:29.439 +++ CONFIG_OCF=n 00:05:29.439 +++ CONFIG_FUSE=n 00:05:29.439 +++ CONFIG_VTUNE_DIR= 00:05:29.439 +++ CONFIG_FUZZER_LIB= 00:05:29.439 +++ CONFIG_FUZZER=n 00:05:29.439 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:29.439 +++ CONFIG_CRYPTO=n 00:05:29.439 +++ CONFIG_PGO_USE=n 00:05:29.439 +++ CONFIG_VHOST=y 00:05:29.439 +++ CONFIG_DAOS=n 00:05:29.439 +++ CONFIG_DPDK_INC_DIR= 00:05:29.439 +++ CONFIG_DAOS_DIR= 00:05:29.439 +++ CONFIG_UNIT_TESTS=y 00:05:29.439 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:29.439 +++ CONFIG_VIRTIO=y 00:05:29.439 +++ CONFIG_COVERAGE=y 00:05:29.439 +++ CONFIG_RDMA=y 00:05:29.439 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:29.439 +++ CONFIG_URING_PATH= 00:05:29.439 +++ CONFIG_XNVME=n 00:05:29.439 +++ CONFIG_VFIO_USER=n 00:05:29.439 +++ CONFIG_ARCH=native 00:05:29.439 +++ CONFIG_HAVE_EVP_MAC=y 00:05:29.439 +++ CONFIG_URING_ZNS=n 00:05:29.439 +++ CONFIG_WERROR=y 00:05:29.439 +++ CONFIG_HAVE_LIBBSD=n 00:05:29.439 +++ CONFIG_UBSAN=y 00:05:29.439 +++ CONFIG_IPSEC_MB_DIR= 00:05:29.439 +++ CONFIG_GOLANG=n 00:05:29.439 +++ CONFIG_ISAL=y 00:05:29.439 +++ CONFIG_IDXD_KERNEL=n 00:05:29.439 +++ CONFIG_DPDK_LIB_DIR= 00:05:29.439 +++ CONFIG_RDMA_PROV=verbs 00:05:29.439 +++ CONFIG_APPS=y 00:05:29.439 +++ CONFIG_SHARED=n 00:05:29.439 +++ CONFIG_HAVE_KEYUTILS=y 00:05:29.439 +++ CONFIG_FC_PATH= 00:05:29.439 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:29.439 +++ CONFIG_FC=n 00:05:29.439 +++ CONFIG_AVAHI=n 00:05:29.439 +++ CONFIG_FIO_PLUGIN=y 00:05:29.439 +++ CONFIG_RAID5F=y 00:05:29.439 +++ CONFIG_EXAMPLES=y 00:05:29.439 +++ CONFIG_TESTS=y 00:05:29.439 +++ CONFIG_CRYPTO_MLX5=n 00:05:29.439 +++ CONFIG_MAX_LCORES= 00:05:29.439 +++ CONFIG_IPSEC_MB=n 00:05:29.439 +++ CONFIG_PGO_DIR= 00:05:29.439 +++ CONFIG_DEBUG=y 00:05:29.439 +++ CONFIG_DPDK_COMPRESSDEV=n 
00:05:29.439 +++ CONFIG_CROSS_PREFIX= 00:05:29.439 +++ CONFIG_URING=n 00:05:29.439 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:29.439 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:29.439 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:29.439 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:29.439 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:29.439 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:29.439 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:29.439 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:29.439 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:29.439 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:29.439 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:29.439 +++ VHOST_APP=("$_app_dir/vhost") 00:05:29.439 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:29.439 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:29.439 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:29.439 +++ [[ #ifndef SPDK_CONFIG_H 00:05:29.439 #define SPDK_CONFIG_H 00:05:29.439 #define SPDK_CONFIG_APPS 1 00:05:29.439 #define SPDK_CONFIG_ARCH native 00:05:29.439 #define SPDK_CONFIG_ASAN 1 00:05:29.439 #undef SPDK_CONFIG_AVAHI 00:05:29.439 #undef SPDK_CONFIG_CET 00:05:29.439 #define SPDK_CONFIG_COVERAGE 1 00:05:29.439 #define SPDK_CONFIG_CROSS_PREFIX 00:05:29.439 #undef SPDK_CONFIG_CRYPTO 00:05:29.439 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:29.439 #undef SPDK_CONFIG_CUSTOMOCF 00:05:29.439 #undef SPDK_CONFIG_DAOS 00:05:29.439 #define SPDK_CONFIG_DAOS_DIR 00:05:29.439 #define SPDK_CONFIG_DEBUG 1 00:05:29.439 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:29.439 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:29.439 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:29.439 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:29.439 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:29.439 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:29.439 #define SPDK_CONFIG_EXAMPLES 1 00:05:29.439 #undef SPDK_CONFIG_FC 00:05:29.439 #define SPDK_CONFIG_FC_PATH 00:05:29.439 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:29.439 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:29.439 #undef SPDK_CONFIG_FUSE 00:05:29.439 #undef SPDK_CONFIG_FUZZER 00:05:29.439 #define SPDK_CONFIG_FUZZER_LIB 00:05:29.439 #undef SPDK_CONFIG_GOLANG 00:05:29.439 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:29.439 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:29.439 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:29.439 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:29.439 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:29.439 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:29.439 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:29.439 #define SPDK_CONFIG_IDXD 1 00:05:29.439 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:29.439 #undef SPDK_CONFIG_IPSEC_MB 00:05:29.439 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:29.439 #define SPDK_CONFIG_ISAL 1 00:05:29.439 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:29.439 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:29.439 #define SPDK_CONFIG_LIBDIR 00:05:29.439 #undef SPDK_CONFIG_LTO 00:05:29.439 #define SPDK_CONFIG_MAX_LCORES 00:05:29.439 #define SPDK_CONFIG_NVME_CUSE 1 00:05:29.439 #undef SPDK_CONFIG_OCF 00:05:29.439 #define SPDK_CONFIG_OCF_PATH 00:05:29.439 #define SPDK_CONFIG_OPENSSL_PATH 00:05:29.439 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:29.439 #define SPDK_CONFIG_PGO_DIR 00:05:29.439 #undef SPDK_CONFIG_PGO_USE 00:05:29.439 #define SPDK_CONFIG_PREFIX /usr/local 00:05:29.439 #define SPDK_CONFIG_RAID5F 1 00:05:29.439 
#undef SPDK_CONFIG_RBD 00:05:29.439 #define SPDK_CONFIG_RDMA 1 00:05:29.439 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:29.439 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:29.439 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:29.439 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:29.439 #undef SPDK_CONFIG_SHARED 00:05:29.439 #undef SPDK_CONFIG_SMA 00:05:29.439 #define SPDK_CONFIG_TESTS 1 00:05:29.439 #undef SPDK_CONFIG_TSAN 00:05:29.439 #undef SPDK_CONFIG_UBLK 00:05:29.439 #define SPDK_CONFIG_UBSAN 1 00:05:29.439 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:29.439 #undef SPDK_CONFIG_URING 00:05:29.439 #define SPDK_CONFIG_URING_PATH 00:05:29.439 #undef SPDK_CONFIG_URING_ZNS 00:05:29.439 #undef SPDK_CONFIG_USDT 00:05:29.439 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:29.439 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:29.439 #undef SPDK_CONFIG_VFIO_USER 00:05:29.439 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:29.439 #define SPDK_CONFIG_VHOST 1 00:05:29.439 #define SPDK_CONFIG_VIRTIO 1 00:05:29.439 #undef SPDK_CONFIG_VTUNE 00:05:29.439 #define SPDK_CONFIG_VTUNE_DIR 00:05:29.439 #define SPDK_CONFIG_WERROR 1 00:05:29.439 #define SPDK_CONFIG_WPDK_DIR 00:05:29.439 #undef SPDK_CONFIG_XNVME 00:05:29.439 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:29.439 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:29.439 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.439 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:29.439 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.439 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.439 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:29.439 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:29.439 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:29.439 ++++ export PATH 00:05:29.439 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:29.439 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:29.439 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:29.439 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:29.439 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:29.439 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:29.439 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:29.439 +++ TEST_TAG=N/A 00:05:29.439 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:29.439 +++ 
PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:05:29.439 ++++ uname -s 00:05:29.439 +++ PM_OS=Linux 00:05:29.439 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:29.439 +++ [[ Linux == FreeBSD ]] 00:05:29.439 +++ [[ Linux == Linux ]] 00:05:29.439 +++ [[ QEMU != QEMU ]] 00:05:29.439 +++ MONITOR_RESOURCES_PIDS=() 00:05:29.439 +++ declare -A MONITOR_RESOURCES_PIDS 00:05:29.439 +++ mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:05:29.439 ++ : 0 00:05:29.439 ++ export RUN_NIGHTLY 00:05:29.439 ++ : 0 00:05:29.439 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:29.439 ++ : 0 00:05:29.440 ++ export SPDK_RUN_VALGRIND 00:05:29.440 ++ : 1 00:05:29.440 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:29.440 ++ : 1 00:05:29.440 ++ export SPDK_TEST_UNITTEST 00:05:29.440 ++ : 00:05:29.440 ++ export SPDK_TEST_AUTOBUILD 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_RELEASE_BUILD 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_ISAL 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_ISCSI 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:29.440 ++ : 1 00:05:29.440 ++ export SPDK_TEST_NVME 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_NVME_PMR 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_NVME_BP 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_NVME_CLI 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_NVME_CUSE 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_NVME_FDP 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_NVMF 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_VFIOUSER 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_FUZZER 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_FUZZER_SHORT 00:05:29.440 ++ : rdma 00:05:29.440 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_RBD 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_VHOST 00:05:29.440 ++ : 1 00:05:29.440 ++ export SPDK_TEST_BLOCKDEV 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_IOAT 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_BLOBFS 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_VHOST_INIT 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_LVOL 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:29.440 ++ : 1 00:05:29.440 ++ export SPDK_RUN_ASAN 00:05:29.440 ++ : 1 00:05:29.440 ++ export SPDK_RUN_UBSAN 00:05:29.440 ++ : 00:05:29.440 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_RUN_NON_ROOT 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_CRYPTO 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_FTL 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_OCF 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_VMD 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_OPAL 00:05:29.440 ++ : 00:05:29.440 ++ export SPDK_TEST_NATIVE_DPDK 00:05:29.440 ++ : true 00:05:29.440 ++ export SPDK_AUTOTEST_X 00:05:29.440 ++ : 1 00:05:29.440 ++ export SPDK_TEST_RAID5 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_URING 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_USDT 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_USE_IGB_UIO 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_SCHEDULER 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_SCANBUILD 00:05:29.440 ++ : 00:05:29.440 ++ export SPDK_TEST_NVMF_NICS 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_SMA 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_DAOS 00:05:29.440 ++ : 0 
00:05:29.440 ++ export SPDK_TEST_XNVME 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_ACCEL_DSA 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_ACCEL_IAA 00:05:29.440 ++ : 00:05:29.440 ++ export SPDK_TEST_FUZZER_TARGET 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_TEST_NVMF_MDNS 00:05:29.440 ++ : 0 00:05:29.440 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:29.440 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:29.440 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:29.440 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:29.440 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:29.440 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:29.440 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:29.440 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:29.440 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:29.440 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:29.440 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:29.440 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:29.440 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:29.440 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:29.440 ++ PYTHONDONTWRITEBYTECODE=1 00:05:29.440 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:29.440 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:29.440 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:29.440 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:29.440 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:29.440 ++ rm -rf /var/tmp/asan_suppression_file 00:05:29.440 ++ cat 00:05:29.440 ++ echo leak:libfuse3.so 00:05:29.440 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:29.440 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:29.440 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:29.440 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:29.440 ++ '[' -z /var/spdk/dependencies ']' 00:05:29.440 ++ export DEPENDENCY_DIR 00:05:29.440 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:29.440 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:29.440 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:29.440 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:29.440 ++ export QEMU_BIN= 00:05:29.440 ++ QEMU_BIN= 00:05:29.440 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:29.440 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:29.440 ++ export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:29.440 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:29.440 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:29.440 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:29.440 ++ '[' 0 -eq 0 ']' 00:05:29.440 ++ export valgrind= 00:05:29.440 ++ valgrind= 00:05:29.440 +++ uname -s 00:05:29.440 ++ '[' Linux = Linux ']' 00:05:29.440 ++ HUGEMEM=4096 00:05:29.440 ++ export CLEAR_HUGE=yes 00:05:29.440 ++ CLEAR_HUGE=yes 00:05:29.440 ++ [[ 0 -eq 1 ]] 00:05:29.440 ++ [[ 0 -eq 1 ]] 00:05:29.440 ++ MAKE=make 00:05:29.440 +++ nproc 00:05:29.440 ++ MAKEFLAGS=-j10 00:05:29.440 ++ export HUGEMEM=4096 00:05:29.440 ++ HUGEMEM=4096 00:05:29.440 ++ NO_HUGE=() 00:05:29.440 ++ TEST_MODE= 00:05:29.440 ++ [[ -z '' ]] 00:05:29.440 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:29.440 ++ exec 00:05:29.440 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:29.440 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:29.440 ++ set_test_storage 2147483648 00:05:29.440 ++ [[ -v testdir ]] 00:05:29.440 ++ local requested_size=2147483648 00:05:29.440 ++ local mount target_dir 00:05:29.440 ++ local -A mounts fss sizes avails uses 00:05:29.440 ++ local source fs size avail mount use 00:05:29.440 ++ local storage_fallback storage_candidates 00:05:29.440 +++ mktemp -udt spdk.XXXXXX 00:05:29.440 ++ storage_fallback=/tmp/spdk.BYS58o 00:05:29.440 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:29.440 ++ [[ -n '' ]] 00:05:29.440 ++ [[ -n '' ]] 00:05:29.440 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.BYS58o/tests/unit /tmp/spdk.BYS58o 00:05:29.440 ++ requested_size=2214592512 00:05:29.440 ++ read -r source fs size use avail _ mount 00:05:29.440 +++ df -T 00:05:29.440 +++ grep -v Filesystem 00:05:29.440 ++ mounts["$mount"]=tmpfs 00:05:29.440 ++ fss["$mount"]=tmpfs 00:05:29.440 ++ avails["$mount"]=1252601856 00:05:29.440 ++ sizes["$mount"]=1253683200 00:05:29.440 ++ uses["$mount"]=1081344 00:05:29.440 ++ read -r source fs size use avail _ mount 00:05:29.440 ++ mounts["$mount"]=/dev/vda1 00:05:29.440 ++ fss["$mount"]=ext4 00:05:29.440 ++ avails["$mount"]=10383773696 00:05:29.440 ++ sizes["$mount"]=20616794112 00:05:29.440 ++ uses["$mount"]=10216243200 00:05:29.440 ++ read -r source fs size use avail _ mount 00:05:29.440 ++ mounts["$mount"]=tmpfs 00:05:29.440 ++ fss["$mount"]=tmpfs 00:05:29.440 ++ avails["$mount"]=6268403712 00:05:29.440 ++ sizes["$mount"]=6268403712 00:05:29.440 ++ uses["$mount"]=0 00:05:29.440 ++ read -r source fs size use avail _ mount 00:05:29.440 ++ mounts["$mount"]=tmpfs 00:05:29.440 ++ fss["$mount"]=tmpfs 00:05:29.440 ++ avails["$mount"]=5242880 00:05:29.440 ++ sizes["$mount"]=5242880 00:05:29.440 ++ uses["$mount"]=0 00:05:29.440 ++ read -r source fs size use avail _ mount 00:05:29.440 ++ mounts["$mount"]=/dev/vda15 00:05:29.440 ++ fss["$mount"]=vfat 00:05:29.440 ++ avails["$mount"]=103061504 00:05:29.440 ++ sizes["$mount"]=109395968 00:05:29.440 ++ uses["$mount"]=6334464 00:05:29.440 ++ read -r source fs size use avail _ mount 00:05:29.440 ++ mounts["$mount"]=tmpfs 00:05:29.440 ++ fss["$mount"]=tmpfs 00:05:29.441 ++ avails["$mount"]=1253675008 00:05:29.441 ++ sizes["$mount"]=1253679104 00:05:29.441 ++ uses["$mount"]=4096 00:05:29.441 ++ read -r source fs size use avail _ mount 00:05:29.441 ++ 
mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:05:29.441 ++ fss["$mount"]=fuse.sshfs 00:05:29.441 ++ avails["$mount"]=93960761344 00:05:29.441 ++ sizes["$mount"]=105088212992 00:05:29.441 ++ uses["$mount"]=5742018560 00:05:29.441 ++ read -r source fs size use avail _ mount 00:05:29.441 ++ printf '* Looking for test storage...\n' 00:05:29.441 * Looking for test storage... 00:05:29.441 ++ local target_space new_size 00:05:29.441 ++ for target_dir in "${storage_candidates[@]}" 00:05:29.441 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:29.441 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:29.441 ++ mount=/ 00:05:29.441 ++ target_space=10383773696 00:05:29.441 ++ (( target_space == 0 || target_space < requested_size )) 00:05:29.441 ++ (( target_space >= requested_size )) 00:05:29.441 ++ [[ ext4 == tmpfs ]] 00:05:29.441 ++ [[ ext4 == ramfs ]] 00:05:29.441 ++ [[ / == / ]] 00:05:29.441 ++ new_size=12430835712 00:05:29.441 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:29.441 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:29.441 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:29.441 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:29.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:29.441 ++ return 0 00:05:29.441 ++ set -o errtrace 00:05:29.441 ++ shopt -s extdebug 00:05:29.441 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:29.441 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:29.441 00:19:23 -- common/autotest_common.sh@1673 -- # true 00:05:29.441 00:19:23 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:05:29.441 00:19:23 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:29.441 00:19:23 -- common/autotest_common.sh@29 -- # exec 00:05:29.441 00:19:23 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:29.441 00:19:23 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:29.441 00:19:23 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:29.441 00:19:23 -- common/autotest_common.sh@18 -- # set -x 00:05:29.441 00:19:23 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:29.441 00:19:23 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:29.441 00:19:23 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:29.441 00:19:23 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:29.441 00:19:23 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:29.441 00:19:23 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:05:29.441 00:19:23 -- unit/unittest.sh@179 -- # hash lcov 00:05:29.441 00:19:23 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:29.441 00:19:23 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:29.441 00:19:23 -- unit/unittest.sh@180 -- # cov_avail=yes 00:05:29.441 00:19:23 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:05:29.441 00:19:23 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:29.441 00:19:23 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:29.441 00:19:23 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:29.441 00:19:23 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:05:29.441 --rc lcov_branch_coverage=1 00:05:29.441 --rc lcov_function_coverage=1 00:05:29.441 --rc genhtml_branch_coverage=1 00:05:29.441 --rc genhtml_function_coverage=1 00:05:29.441 --rc genhtml_legend=1 00:05:29.441 --rc geninfo_all_blocks=1 00:05:29.441 ' 00:05:29.441 00:19:23 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:05:29.441 --rc lcov_branch_coverage=1 00:05:29.441 --rc lcov_function_coverage=1 00:05:29.441 --rc genhtml_branch_coverage=1 00:05:29.441 --rc genhtml_function_coverage=1 00:05:29.441 --rc genhtml_legend=1 00:05:29.441 --rc geninfo_all_blocks=1 00:05:29.441 ' 00:05:29.441 00:19:23 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:05:29.441 --rc lcov_branch_coverage=1 00:05:29.441 --rc lcov_function_coverage=1 00:05:29.441 --rc genhtml_branch_coverage=1 00:05:29.441 --rc genhtml_function_coverage=1 00:05:29.441 --rc genhtml_legend=1 00:05:29.441 --rc geninfo_all_blocks=1 00:05:29.441 --no-external' 00:05:29.441 00:19:23 -- unit/unittest.sh@200 -- # LCOV='lcov 00:05:29.441 --rc lcov_branch_coverage=1 00:05:29.441 --rc lcov_function_coverage=1 00:05:29.441 --rc genhtml_branch_coverage=1 00:05:29.441 --rc genhtml_function_coverage=1 00:05:29.441 --rc genhtml_legend=1 00:05:29.441 --rc geninfo_all_blocks=1 00:05:29.441 --no-external' 00:05:29.441 00:19:23 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:37.553 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:37.553 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:52.425 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:52.425 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:52.425 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:52.425 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:52.425 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:52.425 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:31.163 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:31.163 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:31.163 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:31.163 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:31.164 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:31.164 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:31.164 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:31.164 00:20:22 -- unit/unittest.sh@206 -- # uname -m 00:06:31.164 00:20:22 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:06:31.164 00:20:22 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:31.164 00:20:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.164 00:20:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.164 00:20:22 -- common/autotest_common.sh@10 -- # set +x 00:06:31.164 ************************************ 00:06:31.164 START TEST unittest_pci_event 00:06:31.164 ************************************ 00:06:31.164 00:20:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:31.164 00:06:31.164 00:06:31.164 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.164 http://cunit.sourceforge.net/ 00:06:31.164 00:06:31.164 00:06:31.164 Suite: pci_event 00:06:31.164 Test: test_pci_parse_event ...[2024-04-24 00:20:22.220920] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:31.164 [2024-04-24 00:20:22.221826] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 
000000 00:06:31.164 passed 00:06:31.164 00:06:31.164 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.164 suites 1 1 n/a 0 0 00:06:31.164 tests 1 1 1 0 0 00:06:31.164 asserts 15 15 15 0 n/a 00:06:31.164 00:06:31.164 Elapsed time = 0.001 seconds 00:06:31.164 00:06:31.164 real 0m0.042s 00:06:31.164 user 0m0.018s 00:06:31.164 sys 0m0.020s 00:06:31.164 00:20:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.164 ************************************ 00:06:31.164 END TEST unittest_pci_event 00:06:31.164 00:20:22 -- common/autotest_common.sh@10 -- # set +x 00:06:31.164 ************************************ 00:06:31.164 00:20:22 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:31.164 00:20:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.164 00:20:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.164 00:20:22 -- common/autotest_common.sh@10 -- # set +x 00:06:31.164 ************************************ 00:06:31.164 START TEST unittest_include 00:06:31.164 ************************************ 00:06:31.164 00:20:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:31.164 00:06:31.164 00:06:31.164 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.164 http://cunit.sourceforge.net/ 00:06:31.164 00:06:31.164 00:06:31.164 Suite: histogram 00:06:31.164 Test: histogram_test ...passed 00:06:31.164 Test: histogram_merge ...passed 00:06:31.164 00:06:31.164 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.164 suites 1 1 n/a 0 0 00:06:31.164 tests 2 2 2 0 0 00:06:31.164 asserts 50 50 50 0 n/a 00:06:31.164 00:06:31.164 Elapsed time = 0.005 seconds 00:06:31.164 00:06:31.164 real 0m0.037s 00:06:31.164 user 0m0.016s 00:06:31.164 sys 0m0.021s 00:06:31.164 00:20:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.164 00:20:22 -- common/autotest_common.sh@10 -- # set +x 00:06:31.164 ************************************ 00:06:31.164 END TEST unittest_include 00:06:31.164 ************************************ 00:06:31.164 00:20:22 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:06:31.164 00:20:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.164 00:20:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.164 00:20:22 -- common/autotest_common.sh@10 -- # set +x 00:06:31.164 ************************************ 00:06:31.164 START TEST unittest_bdev 00:06:31.164 ************************************ 00:06:31.165 00:20:22 -- common/autotest_common.sh@1111 -- # unittest_bdev 00:06:31.165 00:20:22 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:31.165 00:06:31.165 00:06:31.165 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.165 http://cunit.sourceforge.net/ 00:06:31.165 00:06:31.165 00:06:31.165 Suite: bdev 00:06:31.165 Test: bytes_to_blocks_test ...passed 00:06:31.165 Test: num_blocks_test ...passed 00:06:31.165 Test: io_valid_test ...passed 00:06:31.165 Test: open_write_test ...[2024-04-24 00:20:22.561377] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7988:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:31.165 [2024-04-24 00:20:22.561699] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7988:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:31.165 [2024-04-24 
00:20:22.561841] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7988:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:31.165 passed 00:06:31.165 Test: claim_test ...passed 00:06:31.165 Test: alias_add_del_test ...[2024-04-24 00:20:22.667507] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:31.165 [2024-04-24 00:20:22.667638] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4578:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:31.165 [2024-04-24 00:20:22.667710] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:31.165 passed 00:06:31.165 Test: get_device_stat_test ...passed 00:06:31.165 Test: bdev_io_types_test ...passed 00:06:31.165 Test: bdev_io_wait_test ...passed 00:06:31.165 Test: bdev_io_spans_split_test ...passed 00:06:31.165 Test: bdev_io_boundary_split_test ...passed 00:06:31.165 Test: bdev_io_max_size_and_segment_split_test ...[2024-04-24 00:20:22.871111] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:31.165 passed 00:06:31.165 Test: bdev_io_mix_split_test ...passed 00:06:31.165 Test: bdev_io_split_with_io_wait ...passed 00:06:31.165 Test: bdev_io_write_unit_split_test ...[2024-04-24 00:20:23.027447] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2740:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:31.165 [2024-04-24 00:20:23.027558] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2740:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:31.165 [2024-04-24 00:20:23.027599] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2740:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:31.165 [2024-04-24 00:20:23.027645] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2740:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:31.165 passed 00:06:31.165 Test: bdev_io_alignment_with_boundary ...passed 00:06:31.165 Test: bdev_io_alignment ...passed 00:06:31.165 Test: bdev_histograms ...passed 00:06:31.165 Test: bdev_write_zeroes ...passed 00:06:31.165 Test: bdev_compare_and_write ...passed 00:06:31.165 Test: bdev_compare ...passed 00:06:31.165 Test: bdev_compare_emulated ...passed 00:06:31.165 Test: bdev_zcopy_write ...passed 00:06:31.165 Test: bdev_zcopy_read ...passed 00:06:31.165 Test: bdev_open_while_hotremove ...passed 00:06:31.165 Test: bdev_close_while_hotremove ...passed 00:06:31.165 Test: bdev_open_ext_test ...[2024-04-24 00:20:23.654612] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8094:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:31.165 passed 00:06:31.165 Test: bdev_open_ext_unregister ...[2024-04-24 00:20:23.654832] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8094:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:31.165 passed 00:06:31.165 Test: bdev_set_io_timeout ...passed 00:06:31.165 Test: bdev_set_qd_sampling ...passed 00:06:31.165 Test: lba_range_overlap ...passed 00:06:31.165 Test: lock_lba_range_check_ranges ...passed 00:06:31.165 Test: lock_lba_range_with_io_outstanding ...passed 00:06:31.165 Test: lock_lba_range_overlapped ...passed 00:06:31.165 Test: bdev_quiesce ...[2024-04-24 00:20:23.953328] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10017:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:06:31.165 passed 00:06:31.165 Test: bdev_io_abort ...passed 00:06:31.165 Test: bdev_unmap ...passed 00:06:31.165 Test: bdev_write_zeroes_split_test ...passed 00:06:31.165 Test: bdev_set_options_test ...passed 00:06:31.165 Test: bdev_get_memory_domains ...passed 00:06:31.165 Test: bdev_io_ext ...[2024-04-24 00:20:24.147601] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 483:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:31.165 passed 00:06:31.165 Test: bdev_io_ext_no_opts ...passed 00:06:31.165 Test: bdev_io_ext_invalid_opts ...passed 00:06:31.165 Test: bdev_io_ext_split ...passed 00:06:31.165 Test: bdev_io_ext_bounce_buffer ...passed 00:06:31.165 Test: bdev_register_uuid_alias ...[2024-04-24 00:20:24.435785] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 6f4bc7db-2c2a-4cab-a445-ed114b5f8dc0 already exists 00:06:31.165 [2024-04-24 00:20:24.435863] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:6f4bc7db-2c2a-4cab-a445-ed114b5f8dc0 alias for bdev bdev0 00:06:31.165 passed 00:06:31.165 Test: bdev_unregister_by_name ...passed 00:06:31.165 Test: for_each_bdev_test ...[2024-04-24 00:20:24.462395] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7884:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:31.165 [2024-04-24 00:20:24.462462] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7892:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:31.165 passed 00:06:31.165 Test: bdev_seek_test ...passed 00:06:31.165 Test: bdev_copy ...passed 00:06:31.165 Test: bdev_copy_split_test ...passed 00:06:31.165 Test: examine_locks ...passed 00:06:31.165 Test: claim_v2_rwo ...[2024-04-24 00:20:24.615231] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7988:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.615301] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8618:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.615338] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.615400] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.615420] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.615494] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8613:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:31.165 passed 00:06:31.165 Test: claim_v2_rom ...[2024-04-24 00:20:24.615677] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7988:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.615728] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.615752] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:06:31.165 [2024-04-24 00:20:24.615780] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:31.165 passed 00:06:31.165 Test: claim_v2_rwm ...[2024-04-24 00:20:24.615846] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8656:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:31.165 [2024-04-24 00:20:24.615904] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8651:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:31.165 [2024-04-24 00:20:24.616033] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8686:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:31.165 [2024-04-24 00:20:24.616130] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7988:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.616161] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.616188] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.616215] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.616248] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8706:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:31.165 passed 00:06:31.165 Test: claim_v2_existing_writer ...passed 00:06:31.165 Test: claim_v2_existing_v1 ...[2024-04-24 00:20:24.616306] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8686:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:31.165 [2024-04-24 00:20:24.616439] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8651:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:31.165 [2024-04-24 00:20:24.616470] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8651:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:31.165 passed 00:06:31.165 Test: claim_v1_existing_v2 ...[2024-04-24 00:20:24.616584] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.616622] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.616644] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.616769] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:31.165 passed 00:06:31.165 Test: examine_claimed ...[2024-04-24 00:20:24.616823] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type 
read_many_write_many by module bdev_ut 00:06:31.165 [2024-04-24 00:20:24.616859] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8455:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:31.165 passed 00:06:31.165 00:06:31.165 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.165 suites 1 1 n/a 0 0 00:06:31.165 tests 59 59 59 0 0 00:06:31.165 asserts 4599 4599 4599 0 n/a 00:06:31.165 00:06:31.165 Elapsed time = 2.135 seconds 00:06:31.165 [2024-04-24 00:20:24.617171] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8783:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:31.165 00:20:24 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:31.165 00:06:31.165 00:06:31.165 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.166 http://cunit.sourceforge.net/ 00:06:31.166 00:06:31.166 00:06:31.166 Suite: nvme 00:06:31.166 Test: test_create_ctrlr ...passed 00:06:31.166 Test: test_reset_ctrlr ...[2024-04-24 00:20:24.675731] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 passed 00:06:31.166 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:31.166 Test: test_failover_ctrlr ...passed 00:06:31.166 Test: test_race_between_failover_and_add_secondary_trid ...[2024-04-24 00:20:24.677921] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 [2024-04-24 00:20:24.678095] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 [2024-04-24 00:20:24.678259] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 passed 00:06:31.166 Test: test_pending_reset ...[2024-04-24 00:20:24.679557] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 [2024-04-24 00:20:24.679764] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 passed 00:06:31.166 Test: test_attach_ctrlr ...[2024-04-24 00:20:24.680699] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:31.166 passed 00:06:31.166 Test: test_aer_cb ...passed 00:06:31.166 Test: test_submit_nvme_cmd ...passed 00:06:31.166 Test: test_add_remove_trid ...passed 00:06:31.166 Test: test_abort ...[2024-04-24 00:20:24.683591] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7388:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:31.166 passed 00:06:31.166 Test: test_get_io_qpair ...passed 00:06:31.166 Test: test_bdev_unregister ...passed 00:06:31.166 Test: test_compare_ns ...passed 00:06:31.166 Test: test_init_ana_log_page ...passed 00:06:31.166 Test: test_get_memory_domains ...passed 00:06:31.166 Test: test_reconnect_qpair ...passed 00:06:31.166 Test: test_create_bdev_ctrlr ...[2024-04-24 00:20:24.685775] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:31.166 [2024-04-24 00:20:24.686160] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5336:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:31.166 passed 00:06:31.166 Test: test_add_multi_ns_to_bdev ...[2024-04-24 00:20:24.687294] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4528:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:31.166 passed 00:06:31.166 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:31.166 Test: test_admin_path ...passed 00:06:31.166 Test: test_reset_bdev_ctrlr ...passed 00:06:31.166 Test: test_find_io_path ...passed 00:06:31.166 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:31.166 Test: test_retry_io_for_io_path_error ...passed 00:06:31.166 Test: test_retry_io_count ...passed 00:06:31.166 Test: test_concurrent_read_ana_log_page ...passed 00:06:31.166 Test: test_retry_io_for_ana_error ...passed 00:06:31.166 Test: test_check_io_error_resiliency_params ...passed 00:06:31.166 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:06:31.166 Test: test_reconnect_ctrlr ...[2024-04-24 00:20:24.693130] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6018:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:31.166 [2024-04-24 00:20:24.693210] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6022:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:31.166 [2024-04-24 00:20:24.693231] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6031:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:31.166 [2024-04-24 00:20:24.693257] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6034:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:31.166 [2024-04-24 00:20:24.693287] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6046:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:31.166 [2024-04-24 00:20:24.693327] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6046:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:31.166 [2024-04-24 00:20:24.693353] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6026:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:31.166 [2024-04-24 00:20:24.693406] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6041:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:31.166 [2024-04-24 00:20:24.693437] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6038:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:31.166 passed 00:06:31.166 Test: test_retry_failover_ctrlr ...[2024-04-24 00:20:24.694057] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 [2024-04-24 00:20:24.694165] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:31.166 [2024-04-24 00:20:24.694356] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 [2024-04-24 00:20:24.694432] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 [2024-04-24 00:20:24.694512] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 [2024-04-24 00:20:24.694754] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 passed 00:06:31.166 Test: test_fail_path ...passed 00:06:31.166 Test: test_nvme_ns_cmp ...passed 00:06:31.166 Test: test_ana_transition ...passed 00:06:31.166 Test: test_set_preferred_path ...[2024-04-24 00:20:24.695392] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 [2024-04-24 00:20:24.695518] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 [2024-04-24 00:20:24.695613] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 [2024-04-24 00:20:24.695680] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 [2024-04-24 00:20:24.695785] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 passed 00:06:31.166 Test: test_find_next_io_path ...passed 00:06:31.166 Test: test_find_io_path_min_qd ...passed 00:06:31.166 Test: test_disable_auto_failback ...passed 00:06:31.166 Test: test_set_multipath_policy ...[2024-04-24 00:20:24.697094] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 passed 00:06:31.166 Test: test_uuid_generation ...passed 00:06:31.166 Test: test_retry_io_to_same_path ...passed 00:06:31.166 Test: test_race_between_reset_and_disconnected ...passed 00:06:31.166 Test: test_ctrlr_op_rpc ...passed 00:06:31.166 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:31.166 Test: test_disable_enable_ctrlr ...passed 00:06:31.166 Test: test_delete_ctrlr_done ...passed 00:06:31.166 Test: test_ns_remove_during_reset ...[2024-04-24 00:20:24.699974] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:31.166 [2024-04-24 00:20:24.700093] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:31.166 passed 00:06:31.166 00:06:31.166 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.166 suites 1 1 n/a 0 0 00:06:31.166 tests 48 48 48 0 0 00:06:31.166 asserts 3565 3565 3565 0 n/a 00:06:31.166 00:06:31.166 Elapsed time = 0.026 seconds 00:06:31.166 00:20:24 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:31.166 00:06:31.166 00:06:31.166 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.166 http://cunit.sourceforge.net/ 00:06:31.166 00:06:31.166 Test Options 00:06:31.166 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:06:31.166 00:06:31.166 Suite: raid 00:06:31.166 Test: test_create_raid ...passed 00:06:31.166 Test: test_create_raid_superblock ...passed 00:06:31.166 Test: test_delete_raid ...passed 00:06:31.166 Test: test_create_raid_invalid_args ...[2024-04-24 00:20:24.750534] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1487:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:31.166 [2024-04-24 00:20:24.751420] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:31.166 [2024-04-24 00:20:24.751845] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1471:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:31.166 [2024-04-24 00:20:24.752057] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:31.166 [2024-04-24 00:20:24.752894] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:31.166 passed 00:06:31.166 Test: test_delete_raid_invalid_args ...passed 00:06:31.166 Test: test_io_channel ...passed 00:06:31.166 Test: test_reset_io ...passed 00:06:31.166 Test: test_write_io ...passed 00:06:31.166 Test: test_read_io ...passed 00:06:32.544 Test: test_unmap_io ...passed 00:06:32.545 Test: test_io_failure ...[2024-04-24 00:20:25.940081] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 962:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:32.545 passed 00:06:32.545 Test: test_multi_raid_no_io ...passed 00:06:32.545 Test: test_multi_raid_with_io ...passed 00:06:32.545 Test: test_io_type_supported ...passed 00:06:32.545 Test: test_raid_json_dump_info ...passed 00:06:32.545 Test: test_context_size ...passed 00:06:32.545 Test: test_raid_level_conversions ...passed 00:06:32.545 Test: test_raid_io_split ...passedTest Options 00:06:32.545 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 1 00:06:32.545 00:06:32.545 Suite: raid_dif 00:06:32.545 Test: test_create_raid ...passed 00:06:32.545 Test: test_create_raid_superblock ...passed 00:06:32.545 Test: test_delete_raid ...passed 00:06:32.545 Test: test_create_raid_invalid_args ...[2024-04-24 00:20:25.951363] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1487:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:32.545 [2024-04-24 00:20:25.951719] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:32.545 [2024-04-24 00:20:25.952139] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1471:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:32.545 [2024-04-24 
00:20:25.952425] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:32.545 [2024-04-24 00:20:25.953159] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:32.545 passed 00:06:32.545 Test: test_delete_raid_invalid_args ...passed 00:06:32.545 Test: test_io_channel ...passed 00:06:32.545 Test: test_reset_io ...passed 00:06:32.545 Test: test_write_io ...passed 00:06:32.545 Test: test_read_io ...passed 00:06:33.480 Test: test_unmap_io ...passed 00:06:33.480 Test: test_io_failure ...[2024-04-24 00:20:27.117034] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 962:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:33.480 passed 00:06:33.480 Test: test_multi_raid_no_io ...passed 00:06:33.480 Test: test_multi_raid_with_io ...passed 00:06:33.480 Test: test_io_type_supported ...passed 00:06:33.480 Test: test_raid_json_dump_info ...passed 00:06:33.480 Test: test_context_size ...passed 00:06:33.480 Test: test_raid_level_conversions ...passed 00:06:33.480 Test: test_raid_io_split ...passedTest Options 00:06:33.480 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:06:33.480 00:06:33.480 Suite: raid_single_run 00:06:33.480 Test: test_raid_process ...passed 00:06:33.480 00:06:33.480 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.480 suites 3 3 n/a 0 0 00:06:33.480 tests 37 37 37 0 0 00:06:33.480 asserts 355354 355354 355354 0 n/a 00:06:33.480 00:06:33.480 Elapsed time = 2.369 seconds 00:06:33.480 00:20:27 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:33.480 00:06:33.480 00:06:33.480 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.480 http://cunit.sourceforge.net/ 00:06:33.480 00:06:33.480 00:06:33.480 Suite: raid_sb 00:06:33.480 Test: test_raid_bdev_write_superblock ...passed 00:06:33.480 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:33.480 Test: test_raid_bdev_parse_superblock ...[2024-04-24 00:20:27.186089] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 163:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:33.480 passed 00:06:33.480 Suite: raid_sb_md 00:06:33.480 Test: test_raid_bdev_write_superblock ...passed 00:06:33.480 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:33.480 Test: test_raid_bdev_parse_superblock ...[2024-04-24 00:20:27.187855] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 163:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:33.480 passed 00:06:33.480 Suite: raid_sb_md_interleaved 00:06:33.480 Test: test_raid_bdev_write_superblock ...passed 00:06:33.480 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:33.480 Test: test_raid_bdev_parse_superblock ...[2024-04-24 00:20:27.189202] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 163:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:33.480 passed 00:06:33.480 00:06:33.481 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.481 suites 3 3 n/a 0 0 00:06:33.481 tests 9 9 9 0 0 00:06:33.481 asserts 136 136 136 0 n/a 00:06:33.481 00:06:33.481 Elapsed time = 0.003 seconds 
00:06:33.481 00:20:27 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:33.481 00:06:33.481 00:06:33.481 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.481 http://cunit.sourceforge.net/ 00:06:33.481 00:06:33.481 00:06:33.481 Suite: concat 00:06:33.481 Test: test_concat_start ...passed 00:06:33.481 Test: test_concat_rw ...passed 00:06:33.481 Test: test_concat_null_payload ...passed 00:06:33.481 00:06:33.481 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.481 suites 1 1 n/a 0 0 00:06:33.481 tests 3 3 3 0 0 00:06:33.481 asserts 8460 8460 8460 0 n/a 00:06:33.481 00:06:33.481 Elapsed time = 0.007 seconds 00:06:33.481 00:20:27 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:33.739 00:06:33.739 00:06:33.739 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.739 http://cunit.sourceforge.net/ 00:06:33.739 00:06:33.739 00:06:33.739 Suite: raid1 00:06:33.739 Test: test_raid1_start ...passed 00:06:33.739 Test: test_raid1_read_balancing ...passed 00:06:33.739 00:06:33.739 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.739 suites 1 1 n/a 0 0 00:06:33.739 tests 2 2 2 0 0 00:06:33.739 asserts 2880 2880 2880 0 n/a 00:06:33.739 00:06:33.739 Elapsed time = 0.004 seconds 00:06:33.739 00:20:27 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:33.739 00:06:33.739 00:06:33.739 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.739 http://cunit.sourceforge.net/ 00:06:33.739 00:06:33.739 00:06:33.739 Suite: zone 00:06:33.739 Test: test_zone_get_operation ...passed 00:06:33.739 Test: test_bdev_zone_get_info ...passed 00:06:33.739 Test: test_bdev_zone_management ...passed 00:06:33.739 Test: test_bdev_zone_append ...passed 00:06:33.739 Test: test_bdev_zone_append_with_md ...passed 00:06:33.739 Test: test_bdev_zone_appendv ...passed 00:06:33.739 Test: test_bdev_zone_appendv_with_md ...passed 00:06:33.739 Test: test_bdev_io_get_append_location ...passed 00:06:33.739 00:06:33.739 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.739 suites 1 1 n/a 0 0 00:06:33.739 tests 8 8 8 0 0 00:06:33.739 asserts 94 94 94 0 n/a 00:06:33.739 00:06:33.739 Elapsed time = 0.001 seconds 00:06:33.739 00:20:27 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:33.739 00:06:33.739 00:06:33.739 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.739 http://cunit.sourceforge.net/ 00:06:33.739 00:06:33.739 00:06:33.739 Suite: gpt_parse 00:06:33.739 Test: test_parse_mbr_and_primary ...[2024-04-24 00:20:27.357099] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:33.739 [2024-04-24 00:20:27.358067] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:33.739 [2024-04-24 00:20:27.358430] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:33.739 [2024-04-24 00:20:27.358791] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:33.739 [2024-04-24 00:20:27.359142] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:33.739 [2024-04-24 00:20:27.359510] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:33.739 passed 00:06:33.739 Test: test_parse_secondary ...[2024-04-24 00:20:27.360702] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:33.739 [2024-04-24 00:20:27.360979] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:33.739 [2024-04-24 00:20:27.361264] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:33.739 [2024-04-24 00:20:27.361553] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:33.739 passed 00:06:33.739 Test: test_check_mbr ...[2024-04-24 00:20:27.362726] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:33.739 [2024-04-24 00:20:27.363175] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:33.739 passed 00:06:33.739 Test: test_read_header ...[2024-04-24 00:20:27.363677] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:33.739 [2024-04-24 00:20:27.364015] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:33.739 [2024-04-24 00:20:27.364396] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:33.739 [2024-04-24 00:20:27.364692] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:33.739 [2024-04-24 00:20:27.364986] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:33.739 [2024-04-24 00:20:27.365259] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:33.739 passed 00:06:33.739 Test: test_read_partitions ...[2024-04-24 00:20:27.365768] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:33.739 [2024-04-24 00:20:27.366049] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:33.739 [2024-04-24 00:20:27.366331] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:33.739 [2024-04-24 00:20:27.366594] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:33.739 [2024-04-24 00:20:27.367223] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:06:33.739 passed 00:06:33.739 00:06:33.739 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.739 suites 1 1 n/a 0 0 00:06:33.739 tests 5 5 5 0 0 00:06:33.739 asserts 33 33 33 0 n/a 00:06:33.739 00:06:33.739 Elapsed time = 0.005 seconds 00:06:33.739 00:20:27 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:33.739 00:06:33.739 00:06:33.739 CUnit - A unit testing framework for C - 
Version 2.1-3 00:06:33.739 http://cunit.sourceforge.net/ 00:06:33.739 00:06:33.739 00:06:33.739 Suite: bdev_part 00:06:33.739 Test: part_test ...[2024-04-24 00:20:27.401393] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:06:33.739 passed 00:06:33.739 Test: part_free_test ...passed 00:06:33.739 Test: part_get_io_channel_test ...passed 00:06:33.739 Test: part_construct_ext ...passed 00:06:33.739 00:06:33.739 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.739 suites 1 1 n/a 0 0 00:06:33.739 tests 4 4 4 0 0 00:06:33.739 asserts 48 48 48 0 n/a 00:06:33.739 00:06:33.739 Elapsed time = 0.054 seconds 00:06:33.739 00:20:27 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:33.739 00:06:33.740 00:06:33.740 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.740 http://cunit.sourceforge.net/ 00:06:33.740 00:06:33.740 00:06:33.740 Suite: scsi_nvme_suite 00:06:33.740 Test: scsi_nvme_translate_test ...passed 00:06:33.740 00:06:33.740 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.740 suites 1 1 n/a 0 0 00:06:33.740 tests 1 1 1 0 0 00:06:33.740 asserts 104 104 104 0 n/a 00:06:33.740 00:06:33.740 Elapsed time = 0.000 seconds 00:06:33.740 00:20:27 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:33.998 00:06:33.998 00:06:33.998 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.998 http://cunit.sourceforge.net/ 00:06:33.998 00:06:33.998 00:06:33.998 Suite: lvol 00:06:33.998 Test: ut_lvs_init ...[2024-04-24 00:20:27.537200] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:33.998 passed 00:06:33.998 Test: ut_lvol_init ...[2024-04-24 00:20:27.537725] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:33.998 passed 00:06:33.998 Test: ut_lvol_snapshot ...passed 00:06:33.998 Test: ut_lvol_clone ...passed 00:06:33.998 Test: ut_lvs_destroy ...passed 00:06:33.998 Test: ut_lvs_unload ...passed 00:06:33.998 Test: ut_lvol_resize ...[2024-04-24 00:20:27.539182] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:33.998 passed 00:06:33.998 Test: ut_lvol_set_read_only ...passed 00:06:33.998 Test: ut_lvol_hotremove ...passed 00:06:33.998 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:33.998 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:33.998 Test: ut_lvol_read_write ...passed 00:06:33.998 Test: ut_vbdev_lvol_submit_request ...passed 00:06:33.998 Test: ut_lvol_examine_config ...passed 00:06:33.998 Test: ut_lvol_examine_disk ...[2024-04-24 00:20:27.539896] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:33.998 passed 00:06:33.998 Test: ut_lvol_rename ...[2024-04-24 00:20:27.540939] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:33.998 [2024-04-24 00:20:27.541042] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:33.998 passed 00:06:33.998 Test: ut_bdev_finish ...passed 00:06:33.998 Test: ut_lvs_rename ...passed 00:06:33.998 Test: ut_lvol_seek ...passed 00:06:33.998 Test: 
ut_esnap_dev_create ...[2024-04-24 00:20:27.541799] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:33.998 [2024-04-24 00:20:27.541886] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:33.998 [2024-04-24 00:20:27.541916] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:33.998 [2024-04-24 00:20:27.541968] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:06:33.998 passed 00:06:33.998 Test: ut_lvol_esnap_clone_bad_args ...passed 00:06:33.998 00:06:33.998 [2024-04-24 00:20:27.542088] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:33.998 [2024-04-24 00:20:27.542122] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:33.998 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.998 suites 1 1 n/a 0 0 00:06:33.998 tests 21 21 21 0 0 00:06:33.998 asserts 758 758 758 0 n/a 00:06:33.999 00:06:33.999 Elapsed time = 0.005 seconds 00:06:33.999 00:20:27 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:33.999 00:06:33.999 00:06:33.999 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.999 http://cunit.sourceforge.net/ 00:06:33.999 00:06:33.999 00:06:33.999 Suite: zone_block 00:06:33.999 Test: test_zone_block_create ...passed 00:06:33.999 Test: test_zone_block_create_invalid ...[2024-04-24 00:20:27.612185] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:33.999 [2024-04-24 00:20:27.612575] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-24 00:20:27.612795] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:33.999 [2024-04-24 00:20:27.612875] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-24 00:20:27.613067] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:33.999 [2024-04-24 00:20:27.613102] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-04-24 00:20:27.613219] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:33.999 [2024-04-24 00:20:27.613282] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:33.999 Test: test_get_zone_info ...[2024-04-24 00:20:27.613953] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 
510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.614042] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.614115] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 passed 00:06:33.999 Test: test_supported_io_types ...passed 00:06:33.999 Test: test_reset_zone ...[2024-04-24 00:20:27.615142] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.615212] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 passed 00:06:33.999 Test: test_open_zone ...[2024-04-24 00:20:27.615741] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.616476] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 passed 00:06:33.999 Test: test_zone_write ...[2024-04-24 00:20:27.616569] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.617094] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:33.999 [2024-04-24 00:20:27.617163] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.617238] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:33.999 [2024-04-24 00:20:27.617299] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.624118] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:33.999 [2024-04-24 00:20:27.624194] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.624305] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:33.999 [2024-04-24 00:20:27.624340] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.631236] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:33.999 passed 00:06:33.999 Test: test_zone_read ...[2024-04-24 00:20:27.631343] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:33.999 [2024-04-24 00:20:27.631907] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:33.999 [2024-04-24 00:20:27.631960] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.632055] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:33.999 [2024-04-24 00:20:27.632103] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.632642] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:33.999 [2024-04-24 00:20:27.632682] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 passed 00:06:33.999 Test: test_close_zone ...[2024-04-24 00:20:27.633069] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.633165] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.633423] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.633484] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 passed 00:06:33.999 Test: test_finish_zone ...[2024-04-24 00:20:27.634199] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.634263] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 passed 00:06:33.999 Test: test_append_zone ...[2024-04-24 00:20:27.634749] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:33.999 [2024-04-24 00:20:27.634809] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.634885] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:33.999 [2024-04-24 00:20:27.634917] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.999 [2024-04-24 00:20:27.649135] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:33.999 [2024-04-24 00:20:27.649241] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:33.999 passed 00:06:33.999 00:06:33.999 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.999 suites 1 1 n/a 0 0 00:06:33.999 tests 11 11 11 0 0 00:06:33.999 asserts 3437 3437 3437 0 n/a 00:06:33.999 00:06:33.999 Elapsed time = 0.039 seconds 00:06:33.999 00:20:27 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:33.999 00:06:33.999 00:06:33.999 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.999 http://cunit.sourceforge.net/ 00:06:33.999 00:06:33.999 00:06:33.999 Suite: bdev 00:06:33.999 Test: basic ...[2024-04-24 00:20:27.772404] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x556673626e21): Operation not permitted (rc=-1) 00:06:33.999 [2024-04-24 00:20:27.772883] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x556673626de0): Operation not permitted (rc=-1) 00:06:33.999 [2024-04-24 00:20:27.772959] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x556673626e21): Operation not permitted (rc=-1) 00:06:34.258 passed 00:06:34.258 Test: unregister_and_close ...passed 00:06:34.258 Test: unregister_and_close_different_threads ...passed 00:06:34.258 Test: basic_qos ...passed 00:06:34.258 Test: put_channel_during_reset ...passed 00:06:34.516 Test: aborted_reset ...passed 00:06:34.516 Test: aborted_reset_no_outstanding_io ...passed 00:06:34.516 Test: io_during_reset ...passed 00:06:34.516 Test: reset_completions ...passed 00:06:34.793 Test: io_during_qos_queue ...passed 00:06:34.793 Test: io_during_qos_reset ...passed 00:06:34.793 Test: enomem ...passed 00:06:34.793 Test: enomem_multi_bdev ...passed 00:06:34.793 Test: enomem_multi_bdev_unregister ...passed 00:06:35.057 Test: enomem_multi_io_target ...passed 00:06:35.057 Test: qos_dynamic_enable ...passed 00:06:35.057 Test: bdev_histograms_mt ...passed 00:06:35.057 Test: bdev_set_io_timeout_mt ...[2024-04-24 00:20:28.788322] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:35.057 passed 00:06:35.058 Test: lock_lba_range_then_submit_io ...[2024-04-24 00:20:28.814826] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x556673626da0 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:35.315 passed 00:06:35.315 Test: unregister_during_reset ...passed 00:06:35.315 Test: event_notify_and_close ...passed 00:06:35.315 Suite: bdev_wrong_thread 00:06:35.315 Test: spdk_bdev_register_wt ...[2024-04-24 00:20:28.962661] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8412:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x618000000880 (0x618000000880) 00:06:35.315 passed 00:06:35.315 Test: spdk_bdev_examine_wt ...passed[2024-04-24 00:20:28.963107] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 791:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000000880 (0x618000000880) 00:06:35.315 00:06:35.315 00:06:35.315 Run Summary: Type Total Ran Passed Failed Inactive 00:06:35.315 suites 2 2 n/a 0 0 00:06:35.315 tests 23 23 23 0 0 00:06:35.315 asserts 601 601 601 0 n/a 00:06:35.315 00:06:35.315 Elapsed time = 1.229 seconds 00:06:35.315 00:06:35.315 real 0m6.534s 00:06:35.315 user 0m2.707s 00:06:35.315 sys 0m3.796s 00:06:35.315 00:20:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.315 00:20:28 -- common/autotest_common.sh@10 -- # set +x 00:06:35.315 ************************************ 00:06:35.315 END TEST 
unittest_bdev 00:06:35.315 ************************************ 00:06:35.315 00:20:29 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:35.315 00:20:29 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:35.315 00:20:29 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:35.315 00:20:29 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:35.315 00:20:29 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:35.315 00:20:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.315 00:20:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.315 00:20:29 -- common/autotest_common.sh@10 -- # set +x 00:06:35.572 ************************************ 00:06:35.572 START TEST unittest_bdev_raid5f 00:06:35.572 ************************************ 00:06:35.572 00:20:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:35.572 00:06:35.572 00:06:35.572 CUnit - A unit testing framework for C - Version 2.1-3 00:06:35.572 http://cunit.sourceforge.net/ 00:06:35.572 00:06:35.572 00:06:35.572 Suite: raid5f 00:06:35.572 Test: test_raid5f_start ...passed 00:06:35.830 Test: test_raid5f_submit_read_request ...passed 00:06:36.258 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:40.491 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:07:02.415 Test: test_raid5f_chunk_write_error ...passed 00:07:08.966 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:07:12.246 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:44.341 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:44.341 00:07:44.341 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.341 suites 1 1 n/a 0 0 00:07:44.341 tests 8 8 8 0 0 00:07:44.341 asserts 352392 352392 352392 0 n/a 00:07:44.341 00:07:44.341 Elapsed time = 66.613 seconds 00:07:44.341 00:07:44.341 real 1m6.715s 00:07:44.341 user 1m2.855s 00:07:44.341 sys 0m3.848s 00:07:44.341 00:21:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.341 00:21:35 -- common/autotest_common.sh@10 -- # set +x 00:07:44.341 ************************************ 00:07:44.341 END TEST unittest_bdev_raid5f 00:07:44.341 ************************************ 00:07:44.341 00:21:35 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:07:44.341 00:21:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:44.341 00:21:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.341 00:21:35 -- common/autotest_common.sh@10 -- # set +x 00:07:44.341 ************************************ 00:07:44.341 START TEST unittest_blob_blobfs 00:07:44.341 ************************************ 00:07:44.341 00:21:35 -- common/autotest_common.sh@1111 -- # unittest_blob 00:07:44.341 00:21:35 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:44.341 00:21:35 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:44.341 00:07:44.341 00:07:44.341 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.341 http://cunit.sourceforge.net/ 
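Each *_ut binary driven by unittest.sh prints the CUnit banner above and then runs its registered suites, which is where the repeated "Suite:", "Test: ... passed" and "Run Summary" lines in this log come from. A minimal, self-contained skeleton of such a test program (illustrative only; the suite and test names here are made up) is:

    #include <CUnit/Basic.h>

    /* Trivial example case; real SPDK unit tests call into library code. */
    static void
    test_example(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int
    main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }

        CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        /* Verbose mode emits the per-test "passed" lines seen in this log. */
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();

        unsigned int failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures > 0 ? 1 : 0;
    }

The run_test calls visible in the xtrace lines wrap each such binary, which is where the START TEST / END TEST banners and the real/user/sys timings that bracket each run come from.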
00:07:44.341 00:07:44.341 00:07:44.341 Suite: blob_nocopy_noextent 00:07:44.341 Test: blob_init ...[2024-04-24 00:21:35.959293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:44.341 passed 00:07:44.341 Test: blob_thin_provision ...passed 00:07:44.341 Test: blob_read_only ...passed 00:07:44.341 Test: bs_load ...[2024-04-24 00:21:36.052605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:44.341 passed 00:07:44.341 Test: bs_load_custom_cluster_size ...passed 00:07:44.341 Test: bs_load_after_failed_grow ...passed 00:07:44.341 Test: bs_cluster_sz ...[2024-04-24 00:21:36.080547] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:44.341 [2024-04-24 00:21:36.080932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:44.341 [2024-04-24 00:21:36.081089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:44.341 passed 00:07:44.341 Test: bs_resize_md ...passed 00:07:44.341 Test: bs_destroy ...passed 00:07:44.341 Test: bs_type ...passed 00:07:44.341 Test: bs_super_block ...passed 00:07:44.341 Test: bs_test_recover_cluster_count ...passed 00:07:44.341 Test: bs_grow_live ...passed 00:07:44.341 Test: bs_grow_live_no_space ...passed 00:07:44.341 Test: bs_test_grow ...passed 00:07:44.341 Test: blob_serialize_test ...passed 00:07:44.341 Test: super_block_crc ...passed 00:07:44.341 Test: blob_thin_prov_write_count_io ...passed 00:07:44.341 Test: blob_thin_prov_unmap_cluster ...passed 00:07:44.341 Test: bs_load_iter_test ...passed 00:07:44.341 Test: blob_relations ...[2024-04-24 00:21:36.274816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:44.341 [2024-04-24 00:21:36.274943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.341 [2024-04-24 00:21:36.275918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:44.341 [2024-04-24 00:21:36.275987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.341 passed 00:07:44.341 Test: blob_relations2 ...[2024-04-24 00:21:36.290489] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:44.341 [2024-04-24 00:21:36.290616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.341 [2024-04-24 00:21:36.290652] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:44.341 [2024-04-24 00:21:36.290680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.341 [2024-04-24 00:21:36.292069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:44.341 [2024-04-24 00:21:36.292135] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.341 [2024-04-24 00:21:36.292527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:44.341 [2024-04-24 00:21:36.292586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.341 passed 00:07:44.341 Test: blob_relations3 ...passed 00:07:44.342 Test: blobstore_clean_power_failure ...passed 00:07:44.342 Test: blob_delete_snapshot_power_failure ...[2024-04-24 00:21:36.446118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:44.342 [2024-04-24 00:21:36.458154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:44.342 [2024-04-24 00:21:36.458258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:44.342 [2024-04-24 00:21:36.458293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.342 [2024-04-24 00:21:36.470376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:44.342 [2024-04-24 00:21:36.470468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:44.342 [2024-04-24 00:21:36.470505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:44.342 [2024-04-24 00:21:36.470564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.342 [2024-04-24 00:21:36.482543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:44.342 [2024-04-24 00:21:36.482667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.342 [2024-04-24 00:21:36.494782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:44.342 [2024-04-24 00:21:36.494939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.342 [2024-04-24 00:21:36.507098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:44.342 [2024-04-24 00:21:36.507213] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.342 passed 00:07:44.342 Test: blob_create_snapshot_power_failure ...[2024-04-24 00:21:36.543689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:44.342 [2024-04-24 00:21:36.567355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:44.342 [2024-04-24 00:21:36.579579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:44.342 passed 00:07:44.342 Test: blob_io_unit ...passed 00:07:44.342 Test: blob_io_unit_compatibility 
...passed 00:07:44.342 Test: blob_ext_md_pages ...passed 00:07:44.342 Test: blob_esnap_io_4096_4096 ...passed 00:07:44.342 Test: blob_esnap_io_512_512 ...passed 00:07:44.342 Test: blob_esnap_io_4096_512 ...passed 00:07:44.342 Test: blob_esnap_io_512_4096 ...passed 00:07:44.342 Suite: blob_bs_nocopy_noextent 00:07:44.342 Test: blob_open ...passed 00:07:44.342 Test: blob_create ...[2024-04-24 00:21:36.819456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:44.342 passed 00:07:44.342 Test: blob_create_loop ...passed 00:07:44.342 Test: blob_create_fail ...[2024-04-24 00:21:36.910964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:44.342 passed 00:07:44.342 Test: blob_create_internal ...passed 00:07:44.342 Test: blob_create_zero_extent ...passed 00:07:44.342 Test: blob_snapshot ...passed 00:07:44.342 Test: blob_clone ...passed 00:07:44.342 Test: blob_inflate ...[2024-04-24 00:21:37.086749] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:44.342 passed 00:07:44.342 Test: blob_delete ...passed 00:07:44.342 Test: blob_resize_test ...[2024-04-24 00:21:37.149677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:44.342 passed 00:07:44.342 Test: channel_ops ...passed 00:07:44.342 Test: blob_super ...passed 00:07:44.342 Test: blob_rw_verify_iov ...passed 00:07:44.342 Test: blob_unmap ...passed 00:07:44.342 Test: blob_iter ...passed 00:07:44.342 Test: blob_parse_md ...passed 00:07:44.342 Test: bs_load_pending_removal ...passed 00:07:44.342 Test: bs_unload ...[2024-04-24 00:21:37.405213] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:44.342 passed 00:07:44.342 Test: bs_usable_clusters ...passed 00:07:44.342 Test: blob_crc ...[2024-04-24 00:21:37.468960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:44.342 [2024-04-24 00:21:37.469071] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:44.342 passed 00:07:44.342 Test: blob_flags ...passed 00:07:44.342 Test: bs_version ...passed 00:07:44.342 Test: blob_set_xattrs_test ...[2024-04-24 00:21:37.565639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:44.342 [2024-04-24 00:21:37.565735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:44.342 passed 00:07:44.342 Test: blob_thin_prov_alloc ...passed 00:07:44.342 Test: blob_insert_cluster_msg_test ...passed 00:07:44.342 Test: blob_thin_prov_rw ...passed 00:07:44.342 Test: blob_thin_prov_rle ...passed 00:07:44.342 Test: blob_thin_prov_rw_iov ...passed 00:07:44.342 Test: blob_snapshot_rw ...passed 00:07:44.342 Test: blob_snapshot_rw_iov ...passed 00:07:44.602 Test: blob_inflate_rw ...passed 00:07:44.602 Test: blob_snapshot_freeze_io ...passed 00:07:44.602 Test: blob_operation_split_rw ...passed 00:07:44.860 Test: blob_operation_split_rw_iov ...passed 
00:07:44.860 Test: blob_simultaneous_operations ...[2024-04-24 00:21:38.490028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:44.860 [2024-04-24 00:21:38.490132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.860 [2024-04-24 00:21:38.491315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:44.860 [2024-04-24 00:21:38.491366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.860 [2024-04-24 00:21:38.503111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:44.860 [2024-04-24 00:21:38.503178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.860 [2024-04-24 00:21:38.503288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:44.860 [2024-04-24 00:21:38.503315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:44.860 passed 00:07:44.860 Test: blob_persist_test ...passed 00:07:44.860 Test: blob_decouple_snapshot ...passed 00:07:45.119 Test: blob_seek_io_unit ...passed 00:07:45.119 Test: blob_nested_freezes ...passed 00:07:45.119 Suite: blob_blob_nocopy_noextent 00:07:45.119 Test: blob_write ...passed 00:07:45.119 Test: blob_read ...passed 00:07:45.119 Test: blob_rw_verify ...passed 00:07:45.119 Test: blob_rw_verify_iov_nomem ...passed 00:07:45.119 Test: blob_rw_iov_read_only ...passed 00:07:45.119 Test: blob_xattr ...passed 00:07:45.378 Test: blob_dirty_shutdown ...passed 00:07:45.378 Test: blob_is_degraded ...passed 00:07:45.378 Suite: blob_esnap_bs_nocopy_noextent 00:07:45.378 Test: blob_esnap_create ...passed 00:07:45.378 Test: blob_esnap_thread_add_remove ...passed 00:07:45.378 Test: blob_esnap_clone_snapshot ...passed 00:07:45.378 Test: blob_esnap_clone_inflate ...passed 00:07:45.378 Test: blob_esnap_clone_decouple ...passed 00:07:45.378 Test: blob_esnap_clone_reload ...passed 00:07:45.637 Test: blob_esnap_hotplug ...passed 00:07:45.637 Suite: blob_nocopy_extent 00:07:45.637 Test: blob_init ...[2024-04-24 00:21:39.168884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:45.637 passed 00:07:45.637 Test: blob_thin_provision ...passed 00:07:45.637 Test: blob_read_only ...passed 00:07:45.638 Test: bs_load ...[2024-04-24 00:21:39.211962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:45.638 passed 00:07:45.638 Test: bs_load_custom_cluster_size ...passed 00:07:45.638 Test: bs_load_after_failed_grow ...passed 00:07:45.638 Test: bs_cluster_sz ...[2024-04-24 00:21:39.235936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:45.638 [2024-04-24 00:21:39.236230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
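The bs_cluster_sz cases around this point feed deliberately invalid options to the blobstore: a zeroed option value ("Blobstore options cannot be set to 0") and a cluster size of 4095 bytes, one byte smaller than the 4096-byte page, which bs_alloc rejects. A tiny sketch of those two checks, hypothetical rather than the SPDK implementation, is:

    #include <stdint.h>
    #include <errno.h>

    #define BS_PAGE_SIZE 4096u   /* page size reported in these errors */

    /* Illustrative option check: cluster size must be non-zero and at
     * least one page; smaller values are rejected as in the log around
     * this point. */
    static int
    verify_cluster_sz(uint32_t cluster_sz)
    {
        if (cluster_sz == 0) {
            return -EINVAL;      /* "options cannot be set to 0" */
        }
        if (cluster_sz < BS_PAGE_SIZE) {
            return -EINVAL;      /* "Cluster size 4095 is smaller than page size 4096" */
        }
        return 0;
    }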
00:07:45.638 [2024-04-24 00:21:39.236295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:45.638 passed 00:07:45.638 Test: bs_resize_md ...passed 00:07:45.638 Test: bs_destroy ...passed 00:07:45.638 Test: bs_type ...passed 00:07:45.638 Test: bs_super_block ...passed 00:07:45.638 Test: bs_test_recover_cluster_count ...passed 00:07:45.638 Test: bs_grow_live ...passed 00:07:45.638 Test: bs_grow_live_no_space ...passed 00:07:45.638 Test: bs_test_grow ...passed 00:07:45.638 Test: blob_serialize_test ...passed 00:07:45.638 Test: super_block_crc ...passed 00:07:45.638 Test: blob_thin_prov_write_count_io ...passed 00:07:45.638 Test: blob_thin_prov_unmap_cluster ...passed 00:07:45.638 Test: bs_load_iter_test ...passed 00:07:45.638 Test: blob_relations ...[2024-04-24 00:21:39.404409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:45.638 [2024-04-24 00:21:39.404550] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.638 [2024-04-24 00:21:39.405416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:45.638 [2024-04-24 00:21:39.405474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.638 passed 00:07:45.638 Test: blob_relations2 ...[2024-04-24 00:21:39.418951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:45.638 [2024-04-24 00:21:39.419050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.638 [2024-04-24 00:21:39.419086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:45.638 [2024-04-24 00:21:39.419114] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.638 [2024-04-24 00:21:39.420373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:45.638 [2024-04-24 00:21:39.420441] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.638 [2024-04-24 00:21:39.420799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:45.638 [2024-04-24 00:21:39.420841] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.638 passed 00:07:45.897 Test: blob_relations3 ...passed 00:07:45.897 Test: blobstore_clean_power_failure ...passed 00:07:45.897 Test: blob_delete_snapshot_power_failure ...[2024-04-24 00:21:39.568170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:45.897 [2024-04-24 00:21:39.579966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:45.897 [2024-04-24 00:21:39.591893] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:45.897 [2024-04-24 00:21:39.591978] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:45.897 [2024-04-24 00:21:39.592015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.897 [2024-04-24 00:21:39.603963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:45.897 [2024-04-24 00:21:39.604049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:45.897 [2024-04-24 00:21:39.604085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:45.897 [2024-04-24 00:21:39.604112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.897 [2024-04-24 00:21:39.616073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:45.897 [2024-04-24 00:21:39.616156] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:45.897 [2024-04-24 00:21:39.616188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:45.897 [2024-04-24 00:21:39.616219] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.897 [2024-04-24 00:21:39.627986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:45.897 [2024-04-24 00:21:39.628094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.897 [2024-04-24 00:21:39.639767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:45.897 [2024-04-24 00:21:39.639875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.897 [2024-04-24 00:21:39.651810] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:45.897 [2024-04-24 00:21:39.651905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:45.897 passed 00:07:46.156 Test: blob_create_snapshot_power_failure ...[2024-04-24 00:21:39.686912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:46.156 [2024-04-24 00:21:39.698339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:46.156 [2024-04-24 00:21:39.721073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:46.156 [2024-04-24 00:21:39.732962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:46.156 passed 00:07:46.156 Test: blob_io_unit ...passed 00:07:46.156 Test: blob_io_unit_compatibility ...passed 00:07:46.156 Test: blob_ext_md_pages ...passed 00:07:46.156 Test: blob_esnap_io_4096_4096 ...passed 00:07:46.156 Test: blob_esnap_io_512_512 ...passed 00:07:46.156 Test: blob_esnap_io_4096_512 ...passed 00:07:46.156 Test: 
blob_esnap_io_512_4096 ...passed 00:07:46.156 Suite: blob_bs_nocopy_extent 00:07:46.156 Test: blob_open ...passed 00:07:46.413 Test: blob_create ...[2024-04-24 00:21:39.960095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:46.413 passed 00:07:46.414 Test: blob_create_loop ...passed 00:07:46.414 Test: blob_create_fail ...[2024-04-24 00:21:40.056886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:46.414 passed 00:07:46.414 Test: blob_create_internal ...passed 00:07:46.414 Test: blob_create_zero_extent ...passed 00:07:46.414 Test: blob_snapshot ...passed 00:07:46.414 Test: blob_clone ...passed 00:07:46.671 Test: blob_inflate ...[2024-04-24 00:21:40.231346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:46.671 passed 00:07:46.671 Test: blob_delete ...passed 00:07:46.671 Test: blob_resize_test ...[2024-04-24 00:21:40.295413] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:46.671 passed 00:07:46.671 Test: channel_ops ...passed 00:07:46.671 Test: blob_super ...passed 00:07:46.671 Test: blob_rw_verify_iov ...passed 00:07:46.671 Test: blob_unmap ...passed 00:07:46.930 Test: blob_iter ...passed 00:07:46.930 Test: blob_parse_md ...passed 00:07:46.930 Test: bs_load_pending_removal ...passed 00:07:46.930 Test: bs_unload ...[2024-04-24 00:21:40.545780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:46.930 passed 00:07:46.930 Test: bs_usable_clusters ...passed 00:07:46.930 Test: blob_crc ...[2024-04-24 00:21:40.609419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:46.930 [2024-04-24 00:21:40.609566] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:46.930 passed 00:07:46.930 Test: blob_flags ...passed 00:07:46.930 Test: bs_version ...passed 00:07:46.930 Test: blob_set_xattrs_test ...[2024-04-24 00:21:40.704271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:46.930 [2024-04-24 00:21:40.704375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:46.930 passed 00:07:47.188 Test: blob_thin_prov_alloc ...passed 00:07:47.188 Test: blob_insert_cluster_msg_test ...passed 00:07:47.188 Test: blob_thin_prov_rw ...passed 00:07:47.188 Test: blob_thin_prov_rle ...passed 00:07:47.188 Test: blob_thin_prov_rw_iov ...passed 00:07:47.446 Test: blob_snapshot_rw ...passed 00:07:47.446 Test: blob_snapshot_rw_iov ...passed 00:07:47.704 Test: blob_inflate_rw ...passed 00:07:47.704 Test: blob_snapshot_freeze_io ...passed 00:07:47.704 Test: blob_operation_split_rw ...passed 00:07:47.962 Test: blob_operation_split_rw_iov ...passed 00:07:47.962 Test: blob_simultaneous_operations ...[2024-04-24 00:21:41.629505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.962 [2024-04-24 
00:21:41.629604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.962 [2024-04-24 00:21:41.630776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.962 [2024-04-24 00:21:41.630824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.962 [2024-04-24 00:21:41.643644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.962 [2024-04-24 00:21:41.643718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.962 [2024-04-24 00:21:41.643820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:47.963 [2024-04-24 00:21:41.643837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:47.963 passed 00:07:47.963 Test: blob_persist_test ...passed 00:07:48.222 Test: blob_decouple_snapshot ...passed 00:07:48.222 Test: blob_seek_io_unit ...passed 00:07:48.222 Test: blob_nested_freezes ...passed 00:07:48.222 Suite: blob_blob_nocopy_extent 00:07:48.222 Test: blob_write ...passed 00:07:48.222 Test: blob_read ...passed 00:07:48.222 Test: blob_rw_verify ...passed 00:07:48.222 Test: blob_rw_verify_iov_nomem ...passed 00:07:48.222 Test: blob_rw_iov_read_only ...passed 00:07:48.480 Test: blob_xattr ...passed 00:07:48.480 Test: blob_dirty_shutdown ...passed 00:07:48.480 Test: blob_is_degraded ...passed 00:07:48.480 Suite: blob_esnap_bs_nocopy_extent 00:07:48.480 Test: blob_esnap_create ...passed 00:07:48.480 Test: blob_esnap_thread_add_remove ...passed 00:07:48.480 Test: blob_esnap_clone_snapshot ...passed 00:07:48.480 Test: blob_esnap_clone_inflate ...passed 00:07:48.480 Test: blob_esnap_clone_decouple ...passed 00:07:48.739 Test: blob_esnap_clone_reload ...passed 00:07:48.739 Test: blob_esnap_hotplug ...passed 00:07:48.739 Suite: blob_copy_noextent 00:07:48.739 Test: blob_init ...[2024-04-24 00:21:42.310367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:48.739 passed 00:07:48.739 Test: blob_thin_provision ...passed 00:07:48.739 Test: blob_read_only ...passed 00:07:48.739 Test: bs_load ...[2024-04-24 00:21:42.353702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:48.739 passed 00:07:48.739 Test: bs_load_custom_cluster_size ...passed 00:07:48.739 Test: bs_load_after_failed_grow ...passed 00:07:48.739 Test: bs_cluster_sz ...[2024-04-24 00:21:42.376745] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:48.739 [2024-04-24 00:21:42.376932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:48.739 [2024-04-24 00:21:42.376980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:48.739 passed 00:07:48.739 Test: bs_resize_md ...passed 00:07:48.739 Test: bs_destroy ...passed 00:07:48.739 Test: bs_type ...passed 00:07:48.739 Test: bs_super_block ...passed 00:07:48.739 Test: bs_test_recover_cluster_count ...passed 00:07:48.739 Test: bs_grow_live ...passed 00:07:48.739 Test: bs_grow_live_no_space ...passed 00:07:48.739 Test: bs_test_grow ...passed 00:07:48.739 Test: blob_serialize_test ...passed 00:07:48.739 Test: super_block_crc ...passed 00:07:48.739 Test: blob_thin_prov_write_count_io ...passed 00:07:49.003 Test: blob_thin_prov_unmap_cluster ...passed 00:07:49.003 Test: bs_load_iter_test ...passed 00:07:49.003 Test: blob_relations ...[2024-04-24 00:21:42.552341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:49.003 [2024-04-24 00:21:42.552444] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.003 [2024-04-24 00:21:42.552966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:49.003 [2024-04-24 00:21:42.552998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.003 passed 00:07:49.003 Test: blob_relations2 ...[2024-04-24 00:21:42.566006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:49.003 [2024-04-24 00:21:42.566090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.003 [2024-04-24 00:21:42.566115] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:49.003 [2024-04-24 00:21:42.566129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.003 [2024-04-24 00:21:42.567004] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:49.003 [2024-04-24 00:21:42.567058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.003 [2024-04-24 00:21:42.567309] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:49.003 [2024-04-24 00:21:42.567342] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.003 passed 00:07:49.003 Test: blob_relations3 ...passed 00:07:49.003 Test: blobstore_clean_power_failure ...passed 00:07:49.003 Test: blob_delete_snapshot_power_failure ...[2024-04-24 00:21:42.716122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:49.003 [2024-04-24 00:21:42.730915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:49.003 [2024-04-24 00:21:42.731019] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:49.003 [2024-04-24 00:21:42.731042] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.003 [2024-04-24 00:21:42.742357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:49.003 [2024-04-24 00:21:42.742435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:49.003 [2024-04-24 00:21:42.742456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:49.003 [2024-04-24 00:21:42.742478] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.003 [2024-04-24 00:21:42.753839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:49.003 [2024-04-24 00:21:42.753944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.003 [2024-04-24 00:21:42.765425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:49.003 [2024-04-24 00:21:42.765530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.003 [2024-04-24 00:21:42.777127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:49.003 [2024-04-24 00:21:42.777217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:49.263 passed 00:07:49.263 Test: blob_create_snapshot_power_failure ...[2024-04-24 00:21:42.811437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:49.263 [2024-04-24 00:21:42.833809] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:49.263 [2024-04-24 00:21:42.845309] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:49.263 passed 00:07:49.263 Test: blob_io_unit ...passed 00:07:49.263 Test: blob_io_unit_compatibility ...passed 00:07:49.263 Test: blob_ext_md_pages ...passed 00:07:49.263 Test: blob_esnap_io_4096_4096 ...passed 00:07:49.263 Test: blob_esnap_io_512_512 ...passed 00:07:49.263 Test: blob_esnap_io_4096_512 ...passed 00:07:49.263 Test: blob_esnap_io_512_4096 ...passed 00:07:49.263 Suite: blob_bs_copy_noextent 00:07:49.522 Test: blob_open ...passed 00:07:49.522 Test: blob_create ...[2024-04-24 00:21:43.074501] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:49.522 passed 00:07:49.522 Test: blob_create_loop ...passed 00:07:49.522 Test: blob_create_fail ...[2024-04-24 00:21:43.163330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:49.522 passed 00:07:49.522 Test: blob_create_internal ...passed 00:07:49.522 Test: blob_create_zero_extent ...passed 00:07:49.522 Test: blob_snapshot ...passed 00:07:49.522 Test: blob_clone ...passed 00:07:49.790 Test: blob_inflate ...[2024-04-24 00:21:43.326953] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:49.790 passed 00:07:49.790 Test: blob_delete ...passed 00:07:49.790 Test: blob_resize_test ...[2024-04-24 00:21:43.390075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:49.790 passed 00:07:49.790 Test: channel_ops ...passed 00:07:49.790 Test: blob_super ...passed 00:07:49.790 Test: blob_rw_verify_iov ...passed 00:07:49.790 Test: blob_unmap ...passed 00:07:49.790 Test: blob_iter ...passed 00:07:50.051 Test: blob_parse_md ...passed 00:07:50.051 Test: bs_load_pending_removal ...passed 00:07:50.051 Test: bs_unload ...[2024-04-24 00:21:43.640295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:50.051 passed 00:07:50.051 Test: bs_usable_clusters ...passed 00:07:50.051 Test: blob_crc ...[2024-04-24 00:21:43.703728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:50.051 [2024-04-24 00:21:43.703833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:50.051 passed 00:07:50.051 Test: blob_flags ...passed 00:07:50.051 Test: bs_version ...passed 00:07:50.051 Test: blob_set_xattrs_test ...[2024-04-24 00:21:43.799605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:50.051 [2024-04-24 00:21:43.799734] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:50.051 passed 00:07:50.310 Test: blob_thin_prov_alloc ...passed 00:07:50.310 Test: blob_insert_cluster_msg_test ...passed 00:07:50.310 Test: blob_thin_prov_rw ...passed 00:07:50.310 Test: blob_thin_prov_rle ...passed 00:07:50.310 Test: blob_thin_prov_rw_iov ...passed 00:07:50.568 Test: blob_snapshot_rw ...passed 00:07:50.569 Test: blob_snapshot_rw_iov ...passed 00:07:50.827 Test: blob_inflate_rw ...passed 00:07:50.827 Test: blob_snapshot_freeze_io ...passed 00:07:50.827 Test: blob_operation_split_rw ...passed 00:07:51.087 Test: blob_operation_split_rw_iov ...passed 00:07:51.087 Test: blob_simultaneous_operations ...[2024-04-24 00:21:44.744363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:51.087 [2024-04-24 00:21:44.744465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.087 [2024-04-24 00:21:44.744900] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:51.087 [2024-04-24 00:21:44.744946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.087 [2024-04-24 00:21:44.747539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:51.087 [2024-04-24 00:21:44.747596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.087 [2024-04-24 00:21:44.747689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:51.087 [2024-04-24 00:21:44.747706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.087 passed 00:07:51.087 Test: blob_persist_test ...passed 00:07:51.087 Test: blob_decouple_snapshot ...passed 00:07:51.345 Test: blob_seek_io_unit ...passed 00:07:51.345 Test: blob_nested_freezes ...passed 00:07:51.345 Suite: blob_blob_copy_noextent 00:07:51.345 Test: blob_write ...passed 00:07:51.345 Test: blob_read ...passed 00:07:51.345 Test: blob_rw_verify ...passed 00:07:51.345 Test: blob_rw_verify_iov_nomem ...passed 00:07:51.345 Test: blob_rw_iov_read_only ...passed 00:07:51.345 Test: blob_xattr ...passed 00:07:51.604 Test: blob_dirty_shutdown ...passed 00:07:51.604 Test: blob_is_degraded ...passed 00:07:51.604 Suite: blob_esnap_bs_copy_noextent 00:07:51.604 Test: blob_esnap_create ...passed 00:07:51.604 Test: blob_esnap_thread_add_remove ...passed 00:07:51.604 Test: blob_esnap_clone_snapshot ...passed 00:07:51.604 Test: blob_esnap_clone_inflate ...passed 00:07:51.604 Test: blob_esnap_clone_decouple ...passed 00:07:51.604 Test: blob_esnap_clone_reload ...passed 00:07:51.862 Test: blob_esnap_hotplug ...passed 00:07:51.862 Suite: blob_copy_extent 00:07:51.862 Test: blob_init ...[2024-04-24 00:21:45.409186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:51.862 passed 00:07:51.862 Test: blob_thin_provision ...passed 00:07:51.862 Test: blob_read_only ...passed 00:07:51.862 Test: bs_load ...[2024-04-24 00:21:45.456691] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:51.862 passed 00:07:51.862 Test: bs_load_custom_cluster_size ...passed 00:07:51.862 Test: bs_load_after_failed_grow ...passed 00:07:51.862 Test: bs_cluster_sz ...[2024-04-24 00:21:45.480494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:51.862 [2024-04-24 00:21:45.480702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:51.862 [2024-04-24 00:21:45.480739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:51.862 passed 00:07:51.862 Test: bs_resize_md ...passed 00:07:51.862 Test: bs_destroy ...passed 00:07:51.862 Test: bs_type ...passed 00:07:51.862 Test: bs_super_block ...passed 00:07:51.862 Test: bs_test_recover_cluster_count ...passed 00:07:51.862 Test: bs_grow_live ...passed 00:07:51.862 Test: bs_grow_live_no_space ...passed 00:07:51.862 Test: bs_test_grow ...passed 00:07:51.862 Test: blob_serialize_test ...passed 00:07:51.862 Test: super_block_crc ...passed 00:07:51.862 Test: blob_thin_prov_write_count_io ...passed 00:07:51.862 Test: blob_thin_prov_unmap_cluster ...passed 00:07:51.862 Test: bs_load_iter_test ...passed 00:07:51.862 Test: blob_relations ...[2024-04-24 00:21:45.648725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:51.862 [2024-04-24 00:21:45.648820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.862 [2024-04-24 00:21:45.649406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:51.862 [2024-04-24 00:21:45.649447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.123 passed 00:07:52.123 Test: blob_relations2 ...[2024-04-24 00:21:45.663086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.123 [2024-04-24 00:21:45.663184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.123 [2024-04-24 00:21:45.663212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.123 [2024-04-24 00:21:45.663227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.123 [2024-04-24 00:21:45.664119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.123 [2024-04-24 00:21:45.664165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.123 [2024-04-24 00:21:45.664442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:52.123 [2024-04-24 00:21:45.664477] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.123 passed 00:07:52.123 Test: blob_relations3 ...passed 00:07:52.123 Test: blobstore_clean_power_failure ...passed 00:07:52.123 Test: blob_delete_snapshot_power_failure ...[2024-04-24 00:21:45.815810] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:52.123 [2024-04-24 00:21:45.827611] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:52.123 [2024-04-24 00:21:45.839419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:52.123 [2024-04-24 00:21:45.839503] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:52.123 [2024-04-24 00:21:45.839528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.123 [2024-04-24 00:21:45.851296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:52.123 [2024-04-24 00:21:45.851378] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:52.123 [2024-04-24 00:21:45.851410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:52.123 [2024-04-24 00:21:45.851437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.123 [2024-04-24 00:21:45.863132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:52.123 [2024-04-24 00:21:45.863208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:52.123 [2024-04-24 00:21:45.863245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:52.123 [2024-04-24 00:21:45.863267] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.123 [2024-04-24 00:21:45.874996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:52.123 [2024-04-24 00:21:45.875096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.123 [2024-04-24 00:21:45.886887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:52.123 [2024-04-24 00:21:45.887033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.123 [2024-04-24 00:21:45.898852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:52.123 [2024-04-24 00:21:45.898961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:52.381 passed 00:07:52.381 Test: blob_create_snapshot_power_failure ...[2024-04-24 00:21:45.933934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:52.381 [2024-04-24 00:21:45.945260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:52.381 [2024-04-24 00:21:45.967748] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:52.381 [2024-04-24 00:21:45.979365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:52.381 passed 00:07:52.381 Test: blob_io_unit ...passed 00:07:52.381 Test: blob_io_unit_compatibility ...passed 00:07:52.381 Test: blob_ext_md_pages ...passed 00:07:52.381 Test: blob_esnap_io_4096_4096 ...passed 00:07:52.381 Test: blob_esnap_io_512_512 ...passed 00:07:52.381 Test: blob_esnap_io_4096_512 ...passed 00:07:52.381 Test: 
blob_esnap_io_512_4096 ...passed 00:07:52.381 Suite: blob_bs_copy_extent 00:07:52.639 Test: blob_open ...passed 00:07:52.639 Test: blob_create ...[2024-04-24 00:21:46.208164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:52.639 passed 00:07:52.639 Test: blob_create_loop ...passed 00:07:52.639 Test: blob_create_fail ...[2024-04-24 00:21:46.299742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:52.639 passed 00:07:52.639 Test: blob_create_internal ...passed 00:07:52.639 Test: blob_create_zero_extent ...passed 00:07:52.639 Test: blob_snapshot ...passed 00:07:52.898 Test: blob_clone ...passed 00:07:52.898 Test: blob_inflate ...[2024-04-24 00:21:46.466544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:52.898 passed 00:07:52.898 Test: blob_delete ...passed 00:07:52.898 Test: blob_resize_test ...[2024-04-24 00:21:46.530189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:52.898 passed 00:07:52.898 Test: channel_ops ...passed 00:07:52.898 Test: blob_super ...passed 00:07:52.898 Test: blob_rw_verify_iov ...passed 00:07:52.898 Test: blob_unmap ...passed 00:07:53.158 Test: blob_iter ...passed 00:07:53.158 Test: blob_parse_md ...passed 00:07:53.158 Test: bs_load_pending_removal ...passed 00:07:53.158 Test: bs_unload ...[2024-04-24 00:21:46.784479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:53.158 passed 00:07:53.158 Test: bs_usable_clusters ...passed 00:07:53.158 Test: blob_crc ...[2024-04-24 00:21:46.848639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:53.158 [2024-04-24 00:21:46.848735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:53.158 passed 00:07:53.158 Test: blob_flags ...passed 00:07:53.158 Test: bs_version ...passed 00:07:53.416 Test: blob_set_xattrs_test ...[2024-04-24 00:21:46.950480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:53.416 [2024-04-24 00:21:46.950599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:53.416 passed 00:07:53.416 Test: blob_thin_prov_alloc ...passed 00:07:53.416 Test: blob_insert_cluster_msg_test ...passed 00:07:53.416 Test: blob_thin_prov_rw ...passed 00:07:53.416 Test: blob_thin_prov_rle ...passed 00:07:53.674 Test: blob_thin_prov_rw_iov ...passed 00:07:53.674 Test: blob_snapshot_rw ...passed 00:07:53.674 Test: blob_snapshot_rw_iov ...passed 00:07:53.934 Test: blob_inflate_rw ...passed 00:07:53.934 Test: blob_snapshot_freeze_io ...passed 00:07:53.934 Test: blob_operation_split_rw ...passed 00:07:54.192 Test: blob_operation_split_rw_iov ...passed 00:07:54.192 Test: blob_simultaneous_operations ...[2024-04-24 00:21:47.895802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:54.192 [2024-04-24 
00:21:47.895905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.192 [2024-04-24 00:21:47.896324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:54.192 [2024-04-24 00:21:47.896357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.192 [2024-04-24 00:21:47.898848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:54.192 [2024-04-24 00:21:47.898904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.192 [2024-04-24 00:21:47.899020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:54.192 [2024-04-24 00:21:47.899039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.192 passed 00:07:54.192 Test: blob_persist_test ...passed 00:07:54.476 Test: blob_decouple_snapshot ...passed 00:07:54.476 Test: blob_seek_io_unit ...passed 00:07:54.476 Test: blob_nested_freezes ...passed 00:07:54.476 Suite: blob_blob_copy_extent 00:07:54.476 Test: blob_write ...passed 00:07:54.476 Test: blob_read ...passed 00:07:54.476 Test: blob_rw_verify ...passed 00:07:54.476 Test: blob_rw_verify_iov_nomem ...passed 00:07:54.476 Test: blob_rw_iov_read_only ...passed 00:07:54.476 Test: blob_xattr ...passed 00:07:54.735 Test: blob_dirty_shutdown ...passed 00:07:54.735 Test: blob_is_degraded ...passed 00:07:54.735 Suite: blob_esnap_bs_copy_extent 00:07:54.735 Test: blob_esnap_create ...passed 00:07:54.735 Test: blob_esnap_thread_add_remove ...passed 00:07:54.735 Test: blob_esnap_clone_snapshot ...passed 00:07:54.735 Test: blob_esnap_clone_inflate ...passed 00:07:54.735 Test: blob_esnap_clone_decouple ...passed 00:07:54.992 Test: blob_esnap_clone_reload ...passed 00:07:54.992 Test: blob_esnap_hotplug ...passed 00:07:54.992 00:07:54.992 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.992 suites 16 16 n/a 0 0 00:07:54.992 tests 352 352 352 0 0 00:07:54.992 asserts 93211 93211 93211 0 n/a 00:07:54.992 00:07:54.992 Elapsed time = 12.600 seconds 00:07:54.992 00:21:48 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:54.992 00:07:54.992 00:07:54.992 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.992 http://cunit.sourceforge.net/ 00:07:54.992 00:07:54.992 00:07:54.992 Suite: blob_bdev 00:07:54.992 Test: create_bs_dev ...passed 00:07:54.992 Test: create_bs_dev_ro ...[2024-04-24 00:21:48.692040] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:54.992 passed 00:07:54.992 Test: create_bs_dev_rw ...passed 00:07:54.992 Test: claim_bs_dev ...passed 00:07:54.992 Test: claim_bs_dev_ro ...[2024-04-24 00:21:48.692634] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:54.992 passed 00:07:54.992 Test: deferred_destroy_refs ...passed 00:07:54.992 Test: deferred_destroy_channels ...passed 00:07:54.992 Test: deferred_destroy_threads ...passed 00:07:54.992 00:07:54.992 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.992 suites 1 1 n/a 0 0 00:07:54.992 tests 8 8 8 0 0 00:07:54.992 
asserts 119 119 119 0 n/a 00:07:54.992 00:07:54.992 Elapsed time = 0.001 seconds 00:07:54.992 00:21:48 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:54.992 00:07:54.992 00:07:54.992 CUnit - A unit testing framework for C - Version 2.1-3 00:07:54.992 http://cunit.sourceforge.net/ 00:07:54.992 00:07:54.992 00:07:54.993 Suite: tree 00:07:54.993 Test: blobfs_tree_op_test ...passed 00:07:54.993 00:07:54.993 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.993 suites 1 1 n/a 0 0 00:07:54.993 tests 1 1 1 0 0 00:07:54.993 asserts 27 27 27 0 n/a 00:07:54.993 00:07:54.993 Elapsed time = 0.000 seconds 00:07:54.993 00:21:48 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:55.250 00:07:55.250 00:07:55.250 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.250 http://cunit.sourceforge.net/ 00:07:55.250 00:07:55.250 00:07:55.250 Suite: blobfs_async_ut 00:07:55.250 Test: fs_init ...passed 00:07:55.250 Test: fs_open ...passed 00:07:55.250 Test: fs_create ...passed 00:07:55.250 Test: fs_truncate ...passed 00:07:55.250 Test: fs_rename ...[2024-04-24 00:21:48.947210] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:55.250 passed 00:07:55.250 Test: fs_rw_async ...passed 00:07:55.250 Test: fs_writev_readv_async ...passed 00:07:55.250 Test: tree_find_buffer_ut ...passed 00:07:55.250 Test: channel_ops ...passed 00:07:55.250 Test: channel_ops_sync ...passed 00:07:55.250 00:07:55.250 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.250 suites 1 1 n/a 0 0 00:07:55.250 tests 10 10 10 0 0 00:07:55.250 asserts 292 292 292 0 n/a 00:07:55.250 00:07:55.250 Elapsed time = 0.207 seconds 00:07:55.507 00:21:49 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:55.507 00:07:55.507 00:07:55.507 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.507 http://cunit.sourceforge.net/ 00:07:55.507 00:07:55.507 00:07:55.507 Suite: blobfs_sync_ut 00:07:55.507 Test: cache_read_after_write ...[2024-04-24 00:21:49.172150] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:55.507 passed 00:07:55.507 Test: file_length ...passed 00:07:55.507 Test: append_write_to_extend_blob ...passed 00:07:55.507 Test: partial_buffer ...passed 00:07:55.507 Test: cache_write_null_buffer ...passed 00:07:55.507 Test: fs_create_sync ...passed 00:07:55.507 Test: fs_rename_sync ...passed 00:07:55.507 Test: cache_append_no_cache ...passed 00:07:55.765 Test: fs_delete_file_without_close ...passed 00:07:55.765 00:07:55.765 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.765 suites 1 1 n/a 0 0 00:07:55.765 tests 9 9 9 0 0 00:07:55.765 asserts 345 345 345 0 n/a 00:07:55.765 00:07:55.765 Elapsed time = 0.417 seconds 00:07:55.765 00:21:49 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:55.765 00:07:55.765 00:07:55.765 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.765 http://cunit.sourceforge.net/ 00:07:55.765 00:07:55.765 00:07:55.765 Suite: blobfs_bdev_ut 00:07:55.765 Test: spdk_blobfs_bdev_detect_test ...[2024-04-24 00:21:49.385552] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:07:55.765 passed 00:07:55.765 Test: spdk_blobfs_bdev_create_test ...[2024-04-24 00:21:49.386441] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:55.765 passed 00:07:55.766 Test: spdk_blobfs_bdev_mount_test ...passed 00:07:55.766 00:07:55.766 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.766 suites 1 1 n/a 0 0 00:07:55.766 tests 3 3 3 0 0 00:07:55.766 asserts 9 9 9 0 n/a 00:07:55.766 00:07:55.766 Elapsed time = 0.001 seconds 00:07:55.766 00:07:55.766 real 0m13.480s 00:07:55.766 user 0m12.858s 00:07:55.766 sys 0m0.852s 00:07:55.766 00:21:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:55.766 00:21:49 -- common/autotest_common.sh@10 -- # set +x 00:07:55.766 ************************************ 00:07:55.766 END TEST unittest_blob_blobfs 00:07:55.766 ************************************ 00:07:55.766 00:21:49 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:07:55.766 00:21:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.766 00:21:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.766 00:21:49 -- common/autotest_common.sh@10 -- # set +x 00:07:55.766 ************************************ 00:07:55.766 START TEST unittest_event 00:07:55.766 ************************************ 00:07:55.766 00:21:49 -- common/autotest_common.sh@1111 -- # unittest_event 00:07:55.766 00:21:49 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:55.766 00:07:55.766 00:07:55.766 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.766 http://cunit.sourceforge.net/ 00:07:55.766 00:07:55.766 00:07:55.766 Suite: app_suite 00:07:55.766 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:55.766 00:07:55.766 CPU options: 00:07:55.766 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:55.766 (like [0,1,10]) 00:07:55.766 --lcores lcore to CPU mapping list. The list is in the format: 00:07:55.766 [<,lcores[@CPUs]>...] 00:07:55.766 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:55.766 Within the group, '-' is used for range separator, 00:07:55.766 ',' is used for single number separator. 00:07:55.766 '( )' can be omitted for single element group, 00:07:55.766 '@' can be omitted if cpus and lcores have the same value 00:07:55.766 --disable-cpumask-locks Disable CPU core lock files. 00:07:55.766 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:55.766 pollers in the app support interrupt mode) 00:07:55.766 -p, --main-core main (primary) core for DPDK 00:07:55.766 00:07:55.766 Configuration options: 00:07:55.766 -c, --config, --json JSON config file 00:07:55.766 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:55.766 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:55.766 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:55.766 --rpcs-allowed comma-separated list of permitted RPCS 00:07:55.766 --json-ignore-init-errors don't exit on invalid config entry 00:07:55.766 00:07:55.766 Memory options: 00:07:55.766 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:55.766 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:55.766 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:55.766 -R, --huge-unlink unlink huge files after initialization 00:07:55.766 -n, --mem-channels number of memory channels used for DPDK 00:07:55.766 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:55.766 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:55.766 --no-huge run without using hugepages 00:07:55.766 -i, --shm-id shared memory ID (optional) 00:07:55.766 -g, --single-file-segments force creating just one hugetlbfs file 00:07:55.766 00:07:55.766 PCI options: 00:07:55.766 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:55.766 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:55.766 -u, --no-pci disable PCI access 00:07:55.766 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:55.766 00:07:55.766 Log options: 00:07:55.766 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:55.766 --silence-noticelog disable notice level logging to stderr 00:07:55.766 00:07:55.766 Trace options: 00:07:55.766 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:55.766 setting 0 to disable trace (default 32768) 00:07:55.766 Tracepoints vary in size and can use more than one trace entry. 00:07:55.766 -e, --tpoint-group [:] 00:07:55.766 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:55.766 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:55.766 a tracepoint group. First tpoint inside a group can be enabled by 00:07:55.766 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:55.766 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:55.766 in /include/spdk_internal/trace_defs.h 00:07:55.766 00:07:55.766 Other options: 00:07:55.766 -h, --help show this usage 00:07:55.766 -v, --version print SPDK version 00:07:55.766 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:55.766 --env-context Opaque context for use of the env implementation 00:07:55.766 app_ut [options] 00:07:55.766 00:07:55.766 CPU options: 00:07:55.766 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:55.766 (like [0,1,10]) 00:07:55.766 --lcores lcore to CPU mapping list. The list is in the format: 00:07:55.766 [<,lcores[@CPUs]>...] 00:07:55.766 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:55.766 Within the group, '-' is used for range separator, 00:07:55.766 ',' is used for single number separator. 00:07:55.766 '( )' can be omitted for single element group, 00:07:55.766 '@' can be omitted if cpus and lcores have the same value 00:07:55.766 --disable-cpumask-locks Disable CPU core lock files. 
00:07:55.766 app_ut: invalid option -- 'z' 00:07:55.766 app_ut: unrecognized option '--test-long-opt' 00:07:55.766 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:55.766 pollers in the app support interrupt mode) 00:07:55.766 -p, --main-core main (primary) core for DPDK 00:07:55.766 00:07:55.766 Configuration options: 00:07:55.766 -c, --config, --json JSON config file 00:07:55.766 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:55.766 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:55.766 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:55.766 --rpcs-allowed comma-separated list of permitted RPCS 00:07:55.766 --json-ignore-init-errors don't exit on invalid config entry 00:07:55.766 00:07:55.766 Memory options: 00:07:55.766 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:55.766 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:55.766 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:55.766 -R, --huge-unlink unlink huge files after initialization 00:07:55.766 -n, --mem-channels number of memory channels used for DPDK 00:07:55.766 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:55.766 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:55.766 --no-huge run without using hugepages 00:07:55.766 -i, --shm-id shared memory ID (optional) 00:07:55.766 -g, --single-file-segments force creating just one hugetlbfs file 00:07:55.766 00:07:55.766 PCI options: 00:07:55.766 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:55.766 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:55.766 -u, --no-pci disable PCI access 00:07:55.766 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:55.766 00:07:55.766 Log options: 00:07:55.766 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:55.766 --silence-noticelog disable notice level logging to stderr 00:07:55.766 00:07:55.766 Trace options: 00:07:55.766 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:55.766 setting 0 to disable trace (default 32768) 00:07:55.766 Tracepoints vary in size and can use more than one trace entry. 00:07:55.766 -e, --tpoint-group [:] 00:07:55.767 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:55.767 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:55.767 a tracepoint group. First tpoint inside a group can be enabled by 00:07:55.767 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:55.767 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:55.767 in /include/spdk_internal/trace_defs.h 00:07:55.767 00:07:55.767 Other options: 00:07:55.767 -h, --help show this usage 00:07:55.767 -v, --version print SPDK version 00:07:55.767 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:55.767 --env-context Opaque context for use of the env implementation 00:07:55.767 app_ut [options] 00:07:55.767 00:07:55.767 CPU options: 00:07:55.767 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:55.767 (like [0,1,10]) 00:07:55.767 --lcores lcore to CPU mapping list. The list is in the format: 00:07:55.767 [<,lcores[@CPUs]>...] 
00:07:55.767 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:55.767 Within the group, '-' is used for range separator, 00:07:55.767 ',' is used for single number separator. 00:07:55.767 '( )' can be omitted for single element group, 00:07:55.767 '@' can be omitted if cpus and lcores have the same value 00:07:55.767 --disable-cpumask-locks Disable CPU core lock files. 00:07:55.767 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:55.767 pollers in the app support interrupt mode) 00:07:55.767 -p, --main-core main (primary) core for DPDK 00:07:55.767 00:07:55.767 Configuration options: 00:07:55.767 -c, --config, --json JSON config file 00:07:55.767 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:55.767 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:55.767 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:55.767 --rpcs-allowed comma-separated list of permitted RPCS 00:07:55.767 --json-ignore-init-errors don't exit on invalid config entry 00:07:55.767 00:07:55.767 Memory options: 00:07:55.767 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:55.767 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:55.767 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:55.767 -R, --huge-unlink unlink huge files after initialization 00:07:55.767 -n, --mem-channels number of memory channels used for DPDK 00:07:55.767 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:55.767 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:55.767 --no-huge run without using hugepages 00:07:55.767 -i, --shm-id shared memory ID (optional) 00:07:55.767 -g, --single-file-segments force creating just one hugetlbfs file 00:07:55.767 00:07:55.767 PCI options: 00:07:55.767 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:55.767 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:55.767 -u, --no-pci disable PCI access 00:07:55.767 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:55.767 00:07:55.767 Log options: 00:07:55.767 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:55.767 --silence-noticelog disable notice level logging to stderr 00:07:55.767 00:07:55.767 Trace options: 00:07:55.767 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:55.767 setting 0 to disable trace (default 32768) 00:07:55.767 Tracepoints vary in size and can use more than one trace entry. 00:07:55.767 -e, --tpoint-group [:] 00:07:55.767 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:55.767 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:55.767 a tracepoint group. First tpoint inside a group can be enabled by 00:07:55.767 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:55.767 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:55.767 in /include/spdk_internal/trace_defs.h 00:07:55.767 00:07:55.767 Other options: 00:07:55.767 -h, --help show this usage 00:07:55.767 -v, --version print SPDK version 00:07:55.767 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:55.767 --env-context Opaque context for use of the env implementation 00:07:55.767 passed 00:07:55.767 00:07:55.767 [2024-04-24 00:21:49.522824] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1105:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:07:55.767 [2024-04-24 00:21:49.523266] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1286:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:55.767 [2024-04-24 00:21:49.523597] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1191:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:55.767 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.767 suites 1 1 n/a 0 0 00:07:55.767 tests 1 1 1 0 0 00:07:55.767 asserts 8 8 8 0 n/a 00:07:55.767 00:07:55.767 Elapsed time = 0.002 seconds 00:07:55.767 00:21:49 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:56.025 00:07:56.025 00:07:56.025 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.025 http://cunit.sourceforge.net/ 00:07:56.025 00:07:56.025 00:07:56.025 Suite: app_suite 00:07:56.025 Test: test_create_reactor ...passed 00:07:56.025 Test: test_init_reactors ...passed 00:07:56.025 Test: test_event_call ...passed 00:07:56.025 Test: test_schedule_thread ...passed 00:07:56.025 Test: test_reschedule_thread ...passed 00:07:56.025 Test: test_bind_thread ...passed 00:07:56.025 Test: test_for_each_reactor ...passed 00:07:56.025 Test: test_reactor_stats ...passed 00:07:56.025 Test: test_scheduler ...passed 00:07:56.025 Test: test_governor ...passed 00:07:56.025 00:07:56.025 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.025 suites 1 1 n/a 0 0 00:07:56.025 tests 10 10 10 0 0 00:07:56.025 asserts 344 344 344 0 n/a 00:07:56.025 00:07:56.025 Elapsed time = 0.022 seconds 00:07:56.025 00:07:56.025 real 0m0.111s 00:07:56.025 user 0m0.044s 00:07:56.025 sys 0m0.066s 00:07:56.025 00:21:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.025 00:21:49 -- common/autotest_common.sh@10 -- # set +x 00:07:56.025 ************************************ 00:07:56.025 END TEST unittest_event 00:07:56.025 ************************************ 00:07:56.025 00:21:49 -- unit/unittest.sh@233 -- # uname -s 00:07:56.025 00:21:49 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:07:56.025 00:21:49 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:07:56.025 00:21:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:56.025 00:21:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.025 00:21:49 -- common/autotest_common.sh@10 -- # set +x 00:07:56.025 ************************************ 00:07:56.025 START TEST unittest_ftl 00:07:56.025 ************************************ 00:07:56.025 00:21:49 -- common/autotest_common.sh@1111 -- # unittest_ftl 00:07:56.025 00:21:49 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:56.025 00:07:56.025 00:07:56.025 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.025 http://cunit.sourceforge.net/ 00:07:56.025 00:07:56.025 00:07:56.025 Suite: ftl_band_suite 00:07:56.025 Test: 
test_band_block_offset_from_addr_base ...passed 00:07:56.283 Test: test_band_block_offset_from_addr_offset ...passed 00:07:56.283 Test: test_band_addr_from_block_offset ...passed 00:07:56.283 Test: test_band_set_addr ...passed 00:07:56.283 Test: test_invalidate_addr ...passed 00:07:56.283 Test: test_next_xfer_addr ...passed 00:07:56.283 00:07:56.283 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.283 suites 1 1 n/a 0 0 00:07:56.283 tests 6 6 6 0 0 00:07:56.283 asserts 30356 30356 30356 0 n/a 00:07:56.283 00:07:56.283 Elapsed time = 0.237 seconds 00:07:56.597 00:21:50 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:56.597 00:07:56.597 00:07:56.597 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.597 http://cunit.sourceforge.net/ 00:07:56.597 00:07:56.597 00:07:56.597 Suite: ftl_bitmap 00:07:56.597 Test: test_ftl_bitmap_create ...[2024-04-24 00:21:50.087999] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:56.597 [2024-04-24 00:21:50.088582] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:56.597 passed 00:07:56.597 Test: test_ftl_bitmap_get ...passed 00:07:56.597 Test: test_ftl_bitmap_set ...passed 00:07:56.597 Test: test_ftl_bitmap_clear ...passed 00:07:56.597 Test: test_ftl_bitmap_find_first_set ...passed 00:07:56.597 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:56.597 Test: test_ftl_bitmap_count_set ...passed 00:07:56.597 00:07:56.597 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.597 suites 1 1 n/a 0 0 00:07:56.597 tests 7 7 7 0 0 00:07:56.597 asserts 137 137 137 0 n/a 00:07:56.597 00:07:56.597 Elapsed time = 0.001 seconds 00:07:56.597 00:21:50 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:56.597 00:07:56.597 00:07:56.597 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.597 http://cunit.sourceforge.net/ 00:07:56.597 00:07:56.597 00:07:56.597 Suite: ftl_io_suite 00:07:56.597 Test: test_completion ...passed 00:07:56.597 Test: test_multiple_ios ...passed 00:07:56.597 00:07:56.597 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.597 suites 1 1 n/a 0 0 00:07:56.597 tests 2 2 2 0 0 00:07:56.597 asserts 47 47 47 0 n/a 00:07:56.597 00:07:56.597 Elapsed time = 0.003 seconds 00:07:56.597 00:21:50 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:56.597 00:07:56.597 00:07:56.597 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.597 http://cunit.sourceforge.net/ 00:07:56.597 00:07:56.597 00:07:56.597 Suite: ftl_mngt 00:07:56.597 Test: test_next_step ...passed 00:07:56.597 Test: test_continue_step ...passed 00:07:56.597 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:56.597 Test: test_fail_step ...passed 00:07:56.597 Test: test_mngt_call_and_call_rollback ...passed 00:07:56.597 Test: test_nested_process_failure ...passed 00:07:56.597 00:07:56.597 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.597 suites 1 1 n/a 0 0 00:07:56.597 tests 6 6 6 0 0 00:07:56.597 asserts 176 176 176 0 n/a 00:07:56.597 00:07:56.597 Elapsed time = 0.002 seconds 00:07:56.597 00:21:50 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:56.597 00:07:56.597 00:07:56.597 CUnit - A unit testing framework for C - 
Version 2.1-3 00:07:56.597 http://cunit.sourceforge.net/ 00:07:56.597 00:07:56.597 00:07:56.597 Suite: ftl_mempool 00:07:56.597 Test: test_ftl_mempool_create ...passed 00:07:56.597 Test: test_ftl_mempool_get_put ...passed 00:07:56.597 00:07:56.597 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.597 suites 1 1 n/a 0 0 00:07:56.597 tests 2 2 2 0 0 00:07:56.597 asserts 36 36 36 0 n/a 00:07:56.597 00:07:56.597 Elapsed time = 0.000 seconds 00:07:56.597 00:21:50 -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:56.597 00:07:56.597 00:07:56.597 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.597 http://cunit.sourceforge.net/ 00:07:56.597 00:07:56.597 00:07:56.597 Suite: ftl_addr64_suite 00:07:56.597 Test: test_addr_cached ...passed 00:07:56.597 00:07:56.597 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.597 suites 1 1 n/a 0 0 00:07:56.597 tests 1 1 1 0 0 00:07:56.597 asserts 1536 1536 1536 0 n/a 00:07:56.597 00:07:56.597 Elapsed time = 0.001 seconds 00:07:56.597 00:21:50 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:56.597 00:07:56.597 00:07:56.597 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.597 http://cunit.sourceforge.net/ 00:07:56.597 00:07:56.597 00:07:56.597 Suite: ftl_sb 00:07:56.597 Test: test_sb_crc_v2 ...passed 00:07:56.597 Test: test_sb_crc_v3 ...passed 00:07:56.597 Test: test_sb_v3_md_layout ...[2024-04-24 00:21:50.288779] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:56.597 [2024-04-24 00:21:50.289451] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:56.597 [2024-04-24 00:21:50.289673] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:56.597 [2024-04-24 00:21:50.289885] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:56.597 [2024-04-24 00:21:50.290081] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:56.597 [2024-04-24 00:21:50.290350] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:56.597 [2024-04-24 00:21:50.290564] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:56.597 [2024-04-24 00:21:50.290814] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:56.597 [2024-04-24 00:21:50.291090] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:56.597 [2024-04-24 00:21:50.291301] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:56.597 [2024-04-24 00:21:50.291498] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 
00:07:56.597 passed 00:07:56.597 Test: test_sb_v5_md_layout ...passed 00:07:56.597 00:07:56.597 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.597 suites 1 1 n/a 0 0 00:07:56.597 tests 4 4 4 0 0 00:07:56.597 asserts 148 148 148 0 n/a 00:07:56.597 00:07:56.597 Elapsed time = 0.004 seconds 00:07:56.597 00:21:50 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:56.597 00:07:56.597 00:07:56.597 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.597 http://cunit.sourceforge.net/ 00:07:56.597 00:07:56.597 00:07:56.597 Suite: ftl_layout_upgrade 00:07:56.597 Test: test_l2p_upgrade ...passed 00:07:56.597 00:07:56.597 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.597 suites 1 1 n/a 0 0 00:07:56.597 tests 1 1 1 0 0 00:07:56.597 asserts 140 140 140 0 n/a 00:07:56.597 00:07:56.597 Elapsed time = 0.001 seconds 00:07:56.597 00:07:56.597 real 0m0.633s 00:07:56.597 user 0m0.257s 00:07:56.597 sys 0m0.368s 00:07:56.597 00:21:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.597 00:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:56.597 ************************************ 00:07:56.597 END TEST unittest_ftl 00:07:56.597 ************************************ 00:07:56.855 00:21:50 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:56.855 00:21:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:56.855 00:21:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.855 00:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:56.855 ************************************ 00:07:56.855 START TEST unittest_accel 00:07:56.855 ************************************ 00:07:56.855 00:21:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:56.855 00:07:56.855 00:07:56.855 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.855 http://cunit.sourceforge.net/ 00:07:56.855 00:07:56.855 00:07:56.855 Suite: accel_sequence 00:07:56.855 Test: test_sequence_fill_copy ...passed 00:07:56.855 Test: test_sequence_abort ...passed 00:07:56.855 Test: test_sequence_append_error ...passed 00:07:56.855 Test: test_sequence_completion_error ...[2024-04-24 00:21:50.490022] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1934:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f8c206c27c0 00:07:56.855 [2024-04-24 00:21:50.490607] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1934:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f8c206c27c0 00:07:56.855 [2024-04-24 00:21:50.490878] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1844:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f8c206c27c0 00:07:56.855 [2024-04-24 00:21:50.491141] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1844:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f8c206c27c0 00:07:56.855 passed 00:07:56.855 Test: test_sequence_decompress ...passed 00:07:56.855 Test: test_sequence_reverse ...passed 00:07:56.855 Test: test_sequence_copy_elision ...passed 00:07:56.855 Test: test_sequence_accel_buffers ...passed 00:07:56.855 Test: test_sequence_memory_domain ...[2024-04-24 00:21:50.505984] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1736:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:56.855 [2024-04-24 
00:21:50.506376] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1775:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:56.855 passed 00:07:56.855 Test: test_sequence_module_memory_domain ...passed 00:07:56.855 Test: test_sequence_crypto ...passed 00:07:56.855 Test: test_sequence_driver ...[2024-04-24 00:21:50.515083] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1883:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f8c1fa7a7c0 using driver: ut 00:07:56.855 [2024-04-24 00:21:50.515359] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1947:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f8c1fa7a7c0 through driver: ut 00:07:56.855 passed 00:07:56.855 Test: test_sequence_same_iovs ...passed 00:07:56.855 Test: test_sequence_crc32 ...passed 00:07:56.855 Suite: accel 00:07:56.855 Test: test_spdk_accel_task_complete ...passed 00:07:56.855 Test: test_get_task ...passed 00:07:56.855 Test: test_spdk_accel_submit_copy ...passed 00:07:56.855 Test: test_spdk_accel_submit_dualcast ...[2024-04-24 00:21:50.522234] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 433:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:56.855 [2024-04-24 00:21:50.522374] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 433:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:56.855 passed 00:07:56.855 Test: test_spdk_accel_submit_compare ...passed 00:07:56.855 Test: test_spdk_accel_submit_fill ...passed 00:07:56.855 Test: test_spdk_accel_submit_crc32c ...passed 00:07:56.856 Test: test_spdk_accel_submit_crc32cv ...passed 00:07:56.856 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:56.856 Test: test_spdk_accel_submit_xor ...passed 00:07:56.856 Test: test_spdk_accel_module_find_by_name ...passed 00:07:56.856 Test: test_spdk_accel_module_register ...passed 00:07:56.856 00:07:56.856 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.856 suites 2 2 n/a 0 0 00:07:56.856 tests 26 26 26 0 0 00:07:56.856 asserts 831 831 831 0 n/a 00:07:56.856 00:07:56.856 Elapsed time = 0.042 seconds 00:07:56.856 00:07:56.856 real 0m0.093s 00:07:56.856 user 0m0.048s 00:07:56.856 sys 0m0.040s 00:07:56.856 00:21:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.856 00:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:56.856 ************************************ 00:07:56.856 END TEST unittest_accel 00:07:56.856 ************************************ 00:07:56.856 00:21:50 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:56.856 00:21:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:56.856 00:21:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.856 00:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:57.114 ************************************ 00:07:57.114 START TEST unittest_ioat 00:07:57.114 ************************************ 00:07:57.114 00:21:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:57.114 00:07:57.114 00:07:57.114 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.114 http://cunit.sourceforge.net/ 00:07:57.114 00:07:57.114 00:07:57.114 Suite: ioat 00:07:57.114 Test: ioat_state_check ...passed 00:07:57.114 00:07:57.114 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.114 suites 1 1 n/a 0 0 00:07:57.114 tests 1 1 1 0 0 00:07:57.114 asserts 32 32 32 0 n/a 
00:07:57.114 00:07:57.114 Elapsed time = 0.000 seconds 00:07:57.114 00:07:57.114 real 0m0.040s 00:07:57.114 user 0m0.020s 00:07:57.114 sys 0m0.020s 00:07:57.114 00:21:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.114 00:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:57.114 ************************************ 00:07:57.114 END TEST unittest_ioat 00:07:57.114 ************************************ 00:07:57.114 00:21:50 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:57.114 00:21:50 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:57.114 00:21:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.114 00:21:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.114 00:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:57.114 ************************************ 00:07:57.114 START TEST unittest_idxd_user 00:07:57.114 ************************************ 00:07:57.114 00:21:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:57.114 00:07:57.114 00:07:57.114 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.114 http://cunit.sourceforge.net/ 00:07:57.114 00:07:57.114 00:07:57.114 Suite: idxd_user 00:07:57.114 Test: test_idxd_wait_cmd ...[2024-04-24 00:21:50.821036] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:57.114 [2024-04-24 00:21:50.821641] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:57.114 passed 00:07:57.114 Test: test_idxd_reset_dev ...[2024-04-24 00:21:50.822525] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:57.114 [2024-04-24 00:21:50.822992] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:57.114 passed 00:07:57.114 Test: test_idxd_group_config ...passed 00:07:57.114 Test: test_idxd_wq_config ...passed 00:07:57.114 00:07:57.114 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.114 suites 1 1 n/a 0 0 00:07:57.114 tests 4 4 4 0 0 00:07:57.114 asserts 20 20 20 0 n/a 00:07:57.114 00:07:57.114 Elapsed time = 0.002 seconds 00:07:57.114 00:07:57.114 real 0m0.037s 00:07:57.114 user 0m0.016s 00:07:57.114 sys 0m0.018s 00:07:57.114 00:21:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.114 00:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:57.114 ************************************ 00:07:57.114 END TEST unittest_idxd_user 00:07:57.114 ************************************ 00:07:57.114 00:21:50 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:07:57.114 00:21:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.114 00:21:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.114 00:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:57.373 ************************************ 00:07:57.373 START TEST unittest_iscsi 00:07:57.374 ************************************ 00:07:57.374 00:21:50 -- common/autotest_common.sh@1111 -- # unittest_iscsi 00:07:57.374 00:21:50 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:57.374 00:07:57.374 00:07:57.374 CUnit - A unit testing framework for C - Version 
2.1-3 00:07:57.374 http://cunit.sourceforge.net/ 00:07:57.374 00:07:57.374 00:07:57.374 Suite: conn_suite 00:07:57.374 Test: read_task_split_in_order_case ...passed 00:07:57.374 Test: read_task_split_reverse_order_case ...passed 00:07:57.374 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:57.374 Test: process_non_read_task_completion_test ...passed 00:07:57.374 Test: free_tasks_on_connection ...passed 00:07:57.374 Test: free_tasks_with_queued_datain ...passed 00:07:57.374 Test: abort_queued_datain_task_test ...passed 00:07:57.374 Test: abort_queued_datain_tasks_test ...passed 00:07:57.374 00:07:57.374 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.374 suites 1 1 n/a 0 0 00:07:57.374 tests 8 8 8 0 0 00:07:57.374 asserts 230 230 230 0 n/a 00:07:57.374 00:07:57.374 Elapsed time = 0.001 seconds 00:07:57.374 00:21:50 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:57.374 00:07:57.374 00:07:57.374 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.374 http://cunit.sourceforge.net/ 00:07:57.374 00:07:57.374 00:07:57.374 Suite: iscsi_suite 00:07:57.374 Test: param_negotiation_test ...passed 00:07:57.374 Test: list_negotiation_test ...passed 00:07:57.374 Test: parse_valid_test ...passed 00:07:57.374 Test: parse_invalid_test ...[2024-04-24 00:21:51.024628] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:07:57.374 [2024-04-24 00:21:51.025331] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:07:57.374 [2024-04-24 00:21:51.025603] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:07:57.374 [2024-04-24 00:21:51.025902] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:57.374 [2024-04-24 00:21:51.026314] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:57.374 [2024-04-24 00:21:51.026621] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:07:57.374 [2024-04-24 00:21:51.027264] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:57.374 passed 00:07:57.374 00:07:57.374 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.374 suites 1 1 n/a 0 0 00:07:57.374 tests 4 4 4 0 0 00:07:57.374 asserts 161 161 161 0 n/a 00:07:57.374 00:07:57.374 Elapsed time = 0.008 seconds 00:07:57.374 00:21:51 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:57.374 00:07:57.374 00:07:57.374 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.374 http://cunit.sourceforge.net/ 00:07:57.374 00:07:57.374 00:07:57.374 Suite: iscsi_target_node_suite 00:07:57.374 Test: add_lun_test_cases ...[2024-04-24 00:21:51.075550] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:57.374 [2024-04-24 00:21:51.075892] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:57.374 passed 00:07:57.374 Test: allow_any_allowed ...passed 00:07:57.374 Test: allow_ipv6_allowed ...passed 00:07:57.374 Test: allow_ipv6_denied ...[2024-04-24 00:21:51.075999] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 
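The iscsi_parse_param failures logged just above ("'=' not found", "Empty key", "Overflow Val", "Key name length is bigger than 63", "Duplicated Key") outline the checks applied to iSCSI text key=value parameters. The following is a rough, self-contained sketch of those checks for illustration only: iscsi_kv_is_valid and its max_val_len parameter are hypothetical and are not the SPDK implementation; only the 63-byte key-name limit is taken directly from the error strings, and duplicate-key tracking is omitted.

    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical validation mirroring the iscsi_parse_param errors above. */
    static bool iscsi_kv_is_valid(const char *param, size_t max_val_len)
    {
        const char *eq = strchr(param, '=');
        if (eq == NULL) {                    /* "'=' not found" */
            return false;
        }
        size_t key_len = (size_t)(eq - param);
        if (key_len == 0) {                  /* "Empty key" */
            return false;
        }
        if (key_len > 63) {                  /* "Key name length is bigger than 63" */
            return false;
        }
        if (strlen(eq + 1) > max_val_len) {  /* "Overflow Val ..." style limit */
            return false;
        }
        return true;
    }

    /* Example: iscsi_kv_is_valid("MaxRecvDataSegmentLength=8192", 8192) -> true */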
00:07:57.374 [2024-04-24 00:21:51.076049] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:57.374 [2024-04-24 00:21:51.076089] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:07:57.374 passed 00:07:57.374 Test: allow_ipv6_invalid ...passed 00:07:57.374 Test: allow_ipv4_allowed ...passed 00:07:57.374 Test: allow_ipv4_denied ...passed 00:07:57.374 Test: allow_ipv4_invalid ...passed 00:07:57.374 Test: node_access_allowed ...passed 00:07:57.374 Test: node_access_denied_by_empty_netmask ...passed 00:07:57.374 Test: node_access_multi_initiator_groups_cases ...passed 00:07:57.374 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:57.374 Test: chap_param_test_cases ...[2024-04-24 00:21:51.076553] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:57.374 [2024-04-24 00:21:51.076601] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:57.374 [2024-04-24 00:21:51.076668] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:57.374 passed 00:07:57.374 00:07:57.374 [2024-04-24 00:21:51.076709] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:57.374 [2024-04-24 00:21:51.076751] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:57.374 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.374 suites 1 1 n/a 0 0 00:07:57.374 tests 13 13 13 0 0 00:07:57.374 asserts 50 50 50 0 n/a 00:07:57.374 00:07:57.374 Elapsed time = 0.001 seconds 00:07:57.374 00:21:51 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:57.374 00:07:57.374 00:07:57.374 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.374 http://cunit.sourceforge.net/ 00:07:57.374 00:07:57.374 00:07:57.374 Suite: iscsi_suite 00:07:57.374 Test: op_login_check_target_test ...[2024-04-24 00:21:51.128758] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:07:57.374 passed 00:07:57.374 Test: op_login_session_normal_test ...[2024-04-24 00:21:51.129186] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:57.374 [2024-04-24 00:21:51.129247] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:57.374 [2024-04-24 00:21:51.129299] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:57.374 [2024-04-24 00:21:51.129398] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:57.374 [2024-04-24 00:21:51.129531] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:57.374 [2024-04-24 00:21:51.129648] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:57.374 [2024-04-24 00:21:51.129714] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:57.374 passed 00:07:57.374 Test: maxburstlength_test ...[2024-04-24 00:21:51.130076] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:57.374 passed 00:07:57.374 Test: underflow_for_read_transfer_test ...[2024-04-24 00:21:51.130159] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:07:57.374 passed 00:07:57.374 Test: underflow_for_zero_read_transfer_test ...passed 00:07:57.374 Test: underflow_for_request_sense_test ...passed 00:07:57.374 Test: underflow_for_check_condition_test ...passed 00:07:57.374 Test: add_transfer_task_test ...passed 00:07:57.374 Test: get_transfer_task_test ...passed 00:07:57.374 Test: del_transfer_task_test ...passed 00:07:57.374 Test: clear_all_transfer_tasks_test ...passed 00:07:57.374 Test: build_iovs_test ...passed 00:07:57.374 Test: build_iovs_with_md_test ...passed 00:07:57.374 Test: pdu_hdr_op_login_test ...[2024-04-24 00:21:51.131892] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:57.374 [2024-04-24 00:21:51.132052] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:57.374 [2024-04-24 00:21:51.132165] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:57.374 passed 00:07:57.374 Test: pdu_hdr_op_text_test ...[2024-04-24 00:21:51.132293] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:57.374 passed 00:07:57.374 Test: pdu_hdr_op_logout_test ...[2024-04-24 00:21:51.132395] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:57.374 [2024-04-24 00:21:51.132452] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:57.374 [2024-04-24 00:21:51.132556] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:07:57.374 passed 00:07:57.374 Test: pdu_hdr_op_scsi_test ...[2024-04-24 00:21:51.132729] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:57.374 [2024-04-24 00:21:51.132776] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:57.374 [2024-04-24 00:21:51.132840] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:57.374 [2024-04-24 00:21:51.132952] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:57.374 [2024-04-24 00:21:51.133069] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:57.374 [2024-04-24 00:21:51.133269] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:57.374 passed 00:07:57.374 Test: pdu_hdr_op_task_mgmt_test ...[2024-04-24 00:21:51.133410] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:57.374 [2024-04-24 00:21:51.133508] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:57.374 passed 00:07:57.374 Test: pdu_hdr_op_nopout_test ...[2024-04-24 00:21:51.133772] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:57.375 [2024-04-24 00:21:51.133859] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:57.375 [2024-04-24 00:21:51.133902] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:57.375 passed 00:07:57.375 Test: pdu_hdr_op_data_test ...[2024-04-24 00:21:51.133951] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:57.375 [2024-04-24 00:21:51.134003] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:57.375 [2024-04-24 00:21:51.134093] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:57.375 [2024-04-24 00:21:51.134176] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:57.375 [2024-04-24 00:21:51.134235] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:57.375 [2024-04-24 00:21:51.134314] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:57.375 [2024-04-24 00:21:51.134411] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:57.375 passed 00:07:57.375 Test: empty_text_with_cbit_test ...passed 00:07:57.375 Test: pdu_payload_read_test ...[2024-04-24 00:21:51.134452] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:57.375 [2024-04-24 
00:21:51.136720] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:57.375 passed 00:07:57.375 Test: data_out_pdu_sequence_test ...passed 00:07:57.375 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:57.375 00:07:57.375 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.375 suites 1 1 n/a 0 0 00:07:57.375 tests 24 24 24 0 0 00:07:57.375 asserts 150253 150253 150253 0 n/a 00:07:57.375 00:07:57.375 Elapsed time = 0.018 seconds 00:07:57.633 00:21:51 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:57.633 00:07:57.633 00:07:57.633 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.633 http://cunit.sourceforge.net/ 00:07:57.633 00:07:57.633 00:07:57.633 Suite: init_grp_suite 00:07:57.633 Test: create_initiator_group_success_case ...passed 00:07:57.633 Test: find_initiator_group_success_case ...passed 00:07:57.633 Test: register_initiator_group_twice_case ...passed 00:07:57.633 Test: add_initiator_name_success_case ...passed 00:07:57.633 Test: add_initiator_name_fail_case ...[2024-04-24 00:21:51.190383] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:57.633 passed 00:07:57.633 Test: delete_all_initiator_names_success_case ...passed 00:07:57.633 Test: add_netmask_success_case ...passed 00:07:57.633 Test: add_netmask_fail_case ...[2024-04-24 00:21:51.190949] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:57.633 passed 00:07:57.633 Test: delete_all_netmasks_success_case ...passed 00:07:57.633 Test: initiator_name_overwrite_all_to_any_case ...passed 00:07:57.633 Test: netmask_overwrite_all_to_any_case ...passed 00:07:57.633 Test: add_delete_initiator_names_case ...passed 00:07:57.633 Test: add_duplicated_initiator_names_case ...passed 00:07:57.633 Test: delete_nonexisting_initiator_names_case ...passed 00:07:57.633 Test: add_delete_netmasks_case ...passed 00:07:57.633 Test: add_duplicated_netmasks_case ...passed 00:07:57.633 Test: delete_nonexisting_netmasks_case ...passed 00:07:57.633 00:07:57.633 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.633 suites 1 1 n/a 0 0 00:07:57.633 tests 17 17 17 0 0 00:07:57.633 asserts 108 108 108 0 n/a 00:07:57.633 00:07:57.633 Elapsed time = 0.001 seconds 00:07:57.633 00:21:51 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:57.633 00:07:57.633 00:07:57.633 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.633 http://cunit.sourceforge.net/ 00:07:57.633 00:07:57.633 00:07:57.633 Suite: portal_grp_suite 00:07:57.633 Test: portal_create_ipv4_normal_case ...passed 00:07:57.633 Test: portal_create_ipv6_normal_case ...passed 00:07:57.633 Test: portal_create_ipv4_wildcard_case ...passed 00:07:57.633 Test: portal_create_ipv6_wildcard_case ...passed 00:07:57.633 Test: portal_create_twice_case ...[2024-04-24 00:21:51.229873] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:57.633 passed 00:07:57.633 Test: portal_grp_register_unregister_case ...passed 00:07:57.633 Test: portal_grp_register_twice_case ...passed 00:07:57.633 Test: portal_grp_add_delete_case ...passed 00:07:57.633 Test: portal_grp_add_delete_twice_case ...passed 00:07:57.633 00:07:57.633 Run Summary: 
Type Total Ran Passed Failed Inactive 00:07:57.633 suites 1 1 n/a 0 0 00:07:57.633 tests 9 9 9 0 0 00:07:57.633 asserts 44 44 44 0 n/a 00:07:57.633 00:07:57.633 Elapsed time = 0.004 seconds 00:07:57.633 00:07:57.633 real 0m0.308s 00:07:57.633 user 0m0.147s 00:07:57.633 sys 0m0.157s 00:07:57.633 00:21:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.633 ************************************ 00:07:57.633 END TEST unittest_iscsi 00:07:57.633 00:21:51 -- common/autotest_common.sh@10 -- # set +x 00:07:57.633 ************************************ 00:07:57.633 00:21:51 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:07:57.633 00:21:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.633 00:21:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.633 00:21:51 -- common/autotest_common.sh@10 -- # set +x 00:07:57.633 ************************************ 00:07:57.633 START TEST unittest_json 00:07:57.633 ************************************ 00:07:57.633 00:21:51 -- common/autotest_common.sh@1111 -- # unittest_json 00:07:57.633 00:21:51 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:57.633 00:07:57.633 00:07:57.633 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.633 http://cunit.sourceforge.net/ 00:07:57.633 00:07:57.633 00:07:57.633 Suite: json 00:07:57.634 Test: test_parse_literal ...passed 00:07:57.634 Test: test_parse_string_simple ...passed 00:07:57.634 Test: test_parse_string_control_chars ...passed 00:07:57.634 Test: test_parse_string_utf8 ...passed 00:07:57.634 Test: test_parse_string_escapes_twochar ...passed 00:07:57.634 Test: test_parse_string_escapes_unicode ...passed 00:07:57.634 Test: test_parse_number ...passed 00:07:57.634 Test: test_parse_array ...passed 00:07:57.634 Test: test_parse_object ...passed 00:07:57.634 Test: test_parse_nesting ...passed 00:07:57.634 Test: test_parse_comment ...passed 00:07:57.634 00:07:57.634 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.634 suites 1 1 n/a 0 0 00:07:57.634 tests 11 11 11 0 0 00:07:57.634 asserts 1516 1516 1516 0 n/a 00:07:57.634 00:07:57.634 Elapsed time = 0.002 seconds 00:07:57.634 00:21:51 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:57.634 00:07:57.634 00:07:57.634 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.634 http://cunit.sourceforge.net/ 00:07:57.634 00:07:57.634 00:07:57.634 Suite: json 00:07:57.634 Test: test_strequal ...passed 00:07:57.634 Test: test_num_to_uint16 ...passed 00:07:57.634 Test: test_num_to_int32 ...passed 00:07:57.634 Test: test_num_to_uint64 ...passed 00:07:57.634 Test: test_decode_object ...passed 00:07:57.634 Test: test_decode_array ...passed 00:07:57.634 Test: test_decode_bool ...passed 00:07:57.634 Test: test_decode_uint16 ...passed 00:07:57.634 Test: test_decode_int32 ...passed 00:07:57.634 Test: test_decode_uint32 ...passed 00:07:57.634 Test: test_decode_uint64 ...passed 00:07:57.634 Test: test_decode_string ...passed 00:07:57.634 Test: test_decode_uuid ...passed 00:07:57.634 Test: test_find ...passed 00:07:57.634 Test: test_find_array ...passed 00:07:57.634 Test: test_iterating ...passed 00:07:57.634 Test: test_free_object ...passed 00:07:57.634 00:07:57.634 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.634 suites 1 1 n/a 0 0 00:07:57.634 tests 17 17 17 0 0 00:07:57.634 asserts 236 236 236 0 n/a 00:07:57.634 00:07:57.634 Elapsed time = 0.001 seconds 00:07:57.893 
00:21:51 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:57.893 00:07:57.893 00:07:57.893 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.893 http://cunit.sourceforge.net/ 00:07:57.893 00:07:57.893 00:07:57.893 Suite: json 00:07:57.893 Test: test_write_literal ...passed 00:07:57.893 Test: test_write_string_simple ...passed 00:07:57.893 Test: test_write_string_escapes ...passed 00:07:57.893 Test: test_write_string_utf16le ...passed 00:07:57.893 Test: test_write_number_int32 ...passed 00:07:57.893 Test: test_write_number_uint32 ...passed 00:07:57.893 Test: test_write_number_uint128 ...passed 00:07:57.893 Test: test_write_string_number_uint128 ...passed 00:07:57.893 Test: test_write_number_int64 ...passed 00:07:57.893 Test: test_write_number_uint64 ...passed 00:07:57.893 Test: test_write_number_double ...passed 00:07:57.893 Test: test_write_uuid ...passed 00:07:57.893 Test: test_write_array ...passed 00:07:57.893 Test: test_write_object ...passed 00:07:57.893 Test: test_write_nesting ...passed 00:07:57.893 Test: test_write_val ...passed 00:07:57.893 00:07:57.893 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.893 suites 1 1 n/a 0 0 00:07:57.893 tests 16 16 16 0 0 00:07:57.893 asserts 918 918 918 0 n/a 00:07:57.893 00:07:57.893 Elapsed time = 0.005 seconds 00:07:57.893 00:21:51 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:57.893 00:07:57.893 00:07:57.893 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.893 http://cunit.sourceforge.net/ 00:07:57.893 00:07:57.893 00:07:57.893 Suite: jsonrpc 00:07:57.893 Test: test_parse_request ...passed 00:07:57.893 Test: test_parse_request_streaming ...passed 00:07:57.893 00:07:57.893 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.893 suites 1 1 n/a 0 0 00:07:57.893 tests 2 2 2 0 0 00:07:57.893 asserts 289 289 289 0 n/a 00:07:57.893 00:07:57.893 Elapsed time = 0.005 seconds 00:07:57.893 00:07:57.893 real 0m0.170s 00:07:57.893 user 0m0.076s 00:07:57.893 sys 0m0.096s 00:07:57.893 00:21:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.893 00:21:51 -- common/autotest_common.sh@10 -- # set +x 00:07:57.893 ************************************ 00:07:57.893 END TEST unittest_json 00:07:57.893 ************************************ 00:07:57.893 00:21:51 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:07:57.893 00:21:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:57.893 00:21:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.893 00:21:51 -- common/autotest_common.sh@10 -- # set +x 00:07:57.893 ************************************ 00:07:57.893 START TEST unittest_rpc 00:07:57.893 ************************************ 00:07:57.893 00:21:51 -- common/autotest_common.sh@1111 -- # unittest_rpc 00:07:57.893 00:21:51 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:57.893 00:07:57.893 00:07:57.893 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.893 http://cunit.sourceforge.net/ 00:07:57.893 00:07:57.893 00:07:57.893 Suite: rpc 00:07:57.893 Test: test_jsonrpc_handler ...passed 00:07:57.893 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:57.893 Test: test_rpc_get_methods ...[2024-04-24 00:21:51.639973] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:57.893 passed 00:07:57.893 Test: 
test_rpc_spdk_get_version ...passed 00:07:57.893 Test: test_spdk_rpc_listen_close ...passed 00:07:57.893 Test: test_rpc_run_multiple_servers ...passed 00:07:57.893 00:07:57.893 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.893 suites 1 1 n/a 0 0 00:07:57.893 tests 6 6 6 0 0 00:07:57.893 asserts 23 23 23 0 n/a 00:07:57.893 00:07:57.893 Elapsed time = 0.001 seconds 00:07:57.893 00:07:57.893 real 0m0.039s 00:07:57.893 user 0m0.020s 00:07:57.893 sys 0m0.020s 00:07:57.893 00:21:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.893 00:21:51 -- common/autotest_common.sh@10 -- # set +x 00:07:57.893 ************************************ 00:07:57.893 END TEST unittest_rpc 00:07:57.893 ************************************ 00:07:58.152 00:21:51 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:58.152 00:21:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.152 00:21:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.152 00:21:51 -- common/autotest_common.sh@10 -- # set +x 00:07:58.152 ************************************ 00:07:58.152 START TEST unittest_notify 00:07:58.152 ************************************ 00:07:58.152 00:21:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:58.152 00:07:58.152 00:07:58.152 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.152 http://cunit.sourceforge.net/ 00:07:58.152 00:07:58.152 00:07:58.152 Suite: app_suite 00:07:58.152 Test: notify ...passed 00:07:58.152 00:07:58.152 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.152 suites 1 1 n/a 0 0 00:07:58.152 tests 1 1 1 0 0 00:07:58.152 asserts 13 13 13 0 n/a 00:07:58.152 00:07:58.152 Elapsed time = 0.000 seconds 00:07:58.152 00:07:58.152 real 0m0.030s 00:07:58.152 user 0m0.018s 00:07:58.152 sys 0m0.012s 00:07:58.152 00:21:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:58.152 ************************************ 00:07:58.152 END TEST unittest_notify 00:07:58.152 ************************************ 00:07:58.152 00:21:51 -- common/autotest_common.sh@10 -- # set +x 00:07:58.152 00:21:51 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:07:58.152 00:21:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:58.152 00:21:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.152 00:21:51 -- common/autotest_common.sh@10 -- # set +x 00:07:58.152 ************************************ 00:07:58.152 START TEST unittest_nvme 00:07:58.152 ************************************ 00:07:58.152 00:21:51 -- common/autotest_common.sh@1111 -- # unittest_nvme 00:07:58.152 00:21:51 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:58.152 00:07:58.152 00:07:58.152 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.152 http://cunit.sourceforge.net/ 00:07:58.152 00:07:58.152 00:07:58.152 Suite: nvme 00:07:58.152 Test: test_opc_data_transfer ...passed 00:07:58.152 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:58.152 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:58.152 Test: test_trid_parse_and_compare ...[2024-04-24 00:21:51.921616] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1171:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:58.152 [2024-04-24 00:21:51.921956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1228:spdk_nvme_transport_id_parse: *ERROR*: Failed 
to parse transport ID 00:07:58.152 [2024-04-24 00:21:51.922072] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1183:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:58.152 [2024-04-24 00:21:51.922121] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1228:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:58.152 [2024-04-24 00:21:51.922171] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1194:parse_next_key: *ERROR*: Key without value 00:07:58.152 [2024-04-24 00:21:51.922274] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1228:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:58.152 passed 00:07:58.152 Test: test_trid_trtype_str ...passed 00:07:58.152 Test: test_trid_adrfam_str ...passed 00:07:58.152 Test: test_nvme_ctrlr_probe ...passed 00:07:58.152 Test: test_spdk_nvme_probe ...[2024-04-24 00:21:51.922496] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:58.152 [2024-04-24 00:21:51.922599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:58.152 [2024-04-24 00:21:51.922652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:58.152 [2024-04-24 00:21:51.922761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:58.152 passed 00:07:58.152 Test: test_spdk_nvme_connect ...[2024-04-24 00:21:51.922825] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:58.152 [2024-04-24 00:21:51.922944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 993:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:58.152 [2024-04-24 00:21:51.923344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:58.152 passed 00:07:58.152 Test: test_nvme_ctrlr_probe_internal ...[2024-04-24 00:21:51.923424] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1004:spdk_nvme_connect: *ERROR*: Create probe context failed 00:07:58.153 [2024-04-24 00:21:51.923562] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:58.153 passed 00:07:58.153 Test: test_nvme_init_controllers ...[2024-04-24 00:21:51.923611] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:07:58.153 [2024-04-24 00:21:51.923716] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:07:58.153 passed 00:07:58.153 Test: test_nvme_driver_init ...[2024-04-24 00:21:51.923836] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:58.153 [2024-04-24 00:21:51.923882] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:58.411 [2024-04-24 00:21:52.032243] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:58.411 passed 00:07:58.411 Test: test_spdk_nvme_detach ...[2024-04-24 00:21:52.032479] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:58.411 passed 00:07:58.411 Test: test_nvme_completion_poll_cb ...passed 00:07:58.411 Test: test_nvme_user_copy_cmd_complete ...passed 
00:07:58.411 Test: test_nvme_allocate_request_null ...passed 00:07:58.411 Test: test_nvme_allocate_request ...passed 00:07:58.411 Test: test_nvme_free_request ...passed 00:07:58.411 Test: test_nvme_allocate_request_user_copy ...passed 00:07:58.411 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:58.411 Test: test_nvme_request_check_timeout ...passed 00:07:58.411 Test: test_nvme_wait_for_completion ...passed 00:07:58.411 Test: test_spdk_nvme_parse_func ...passed 00:07:58.411 Test: test_spdk_nvme_detach_async ...passed 00:07:58.411 Test: test_nvme_parse_addr ...[2024-04-24 00:21:52.033468] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1581:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:58.411 passed 00:07:58.411 00:07:58.411 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.411 suites 1 1 n/a 0 0 00:07:58.411 tests 25 25 25 0 0 00:07:58.411 asserts 326 326 326 0 n/a 00:07:58.411 00:07:58.411 Elapsed time = 0.006 seconds 00:07:58.411 00:21:52 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:58.411 00:07:58.411 00:07:58.411 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.411 http://cunit.sourceforge.net/ 00:07:58.411 00:07:58.411 00:07:58.411 Suite: nvme_ctrlr 00:07:58.411 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-04-24 00:21:52.077367] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.411 passed 00:07:58.411 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-04-24 00:21:52.079374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.411 passed 00:07:58.411 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-04-24 00:21:52.080657] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.411 passed 00:07:58.411 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-04-24 00:21:52.081901] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.411 passed 00:07:58.411 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-04-24 00:21:52.083144] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.411 [2024-04-24 00:21:52.084324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-24 00:21:52.085615] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-24 00:21:52.086782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:58.411 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-04-24 00:21:52.089189] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.411 [2024-04-24 00:21:52.091551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-24 
00:21:52.092749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:58.411 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-04-24 00:21:52.095119] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.412 [2024-04-24 00:21:52.096329] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-24 00:21:52.098637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:58.412 Test: test_nvme_ctrlr_init_delay ...[2024-04-24 00:21:52.101041] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.412 passed 00:07:58.412 Test: test_alloc_io_qpair_rr_1 ...[2024-04-24 00:21:52.102280] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.412 [2024-04-24 00:21:52.102441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:58.412 [2024-04-24 00:21:52.102650] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 398:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:58.412 passed 00:07:58.412 Test: test_ctrlr_get_default_ctrlr_opts ...[2024-04-24 00:21:52.102717] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 398:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:58.412 [2024-04-24 00:21:52.102772] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 398:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:58.412 passed 00:07:58.412 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:58.412 Test: test_alloc_io_qpair_wrr_1 ...[2024-04-24 00:21:52.102902] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.412 passed 00:07:58.412 Test: test_alloc_io_qpair_wrr_2 ...[2024-04-24 00:21:52.103090] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.412 [2024-04-24 00:21:52.103271] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:58.412 passed 00:07:58.412 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-04-24 00:21:52.103604] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4857:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:07:58.412 [2024-04-24 00:21:52.103810] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4894:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:58.412 [2024-04-24 00:21:52.103950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4934:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:07:58.412 [2024-04-24 00:21:52.104071] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4894:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:58.412 passed 00:07:58.412 Test: test_nvme_ctrlr_fail ...[2024-04-24 00:21:52.104193] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:07:58.412 passed 00:07:58.412 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:07:58.412 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:58.412 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:07:58.412 Test: test_nvme_ctrlr_test_active_ns ...[2024-04-24 00:21:52.104579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.669 passed 00:07:58.669 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:58.669 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:58.669 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:58.669 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-04-24 00:21:52.449869] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.669 passed 00:07:58.669 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-04-24 00:21:52.456798] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.669 passed 00:07:58.670 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-04-24 00:21:52.458013] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.670 [2024-04-24 00:21:52.458075] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2882:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:58.935 passed 00:07:58.935 Test: test_alloc_io_qpair_fail ...[2024-04-24 00:21:52.459212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ctrlr_add_remove_process ...passed 00:07:58.935 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:07:58.935 Test: test_nvme_ctrlr_set_state ...[2024-04-24 00:21:52.459345] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 510:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-04-24 00:21:52.459473] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:07:58.935 [2024-04-24 00:21:52.459514] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-04-24 00:21:52.481921] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ctrlr_ns_mgmt ...[2024-04-24 00:21:52.523643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ctrlr_reset ...[2024-04-24 00:21:52.525375] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ctrlr_aer_callback ...[2024-04-24 00:21:52.525835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-04-24 00:21:52.527389] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:58.935 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:58.935 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-04-24 00:21:52.529317] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:58.935 Test: test_nvme_ctrlr_ana_resize ...[2024-04-24 00:21:52.530964] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:58.935 Test: test_nvme_transport_ctrlr_ready ...[2024-04-24 00:21:52.532863] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:58.935 [2024-04-24 00:21:52.532965] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4079:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ctrlr_disable ...[2024-04-24 00:21:52.533037] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:58.935 passed 00:07:58.935 00:07:58.935 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.935 suites 1 1 n/a 0 0 00:07:58.935 tests 43 43 43 0 0 00:07:58.935 asserts 10418 10418 10418 0 n/a 00:07:58.935 00:07:58.935 Elapsed time = 0.417 seconds 00:07:58.935 00:21:52 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:07:58.935 00:07:58.935 00:07:58.935 CUnit - A unit testing framework for C - Version 2.1-3 
00:07:58.935 http://cunit.sourceforge.net/ 00:07:58.935 00:07:58.935 00:07:58.935 Suite: nvme_ctrlr_cmd 00:07:58.935 Test: test_get_log_pages ...passed 00:07:58.935 Test: test_set_feature_cmd ...passed 00:07:58.935 Test: test_set_feature_ns_cmd ...passed 00:07:58.935 Test: test_get_feature_cmd ...passed 00:07:58.935 Test: test_get_feature_ns_cmd ...passed 00:07:58.935 Test: test_abort_cmd ...passed 00:07:58.935 Test: test_set_host_id_cmds ...[2024-04-24 00:21:52.605967] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:58.935 passed 00:07:58.935 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:58.935 Test: test_io_raw_cmd ...passed 00:07:58.935 Test: test_io_raw_cmd_with_md ...passed 00:07:58.935 Test: test_namespace_attach ...passed 00:07:58.935 Test: test_namespace_detach ...passed 00:07:58.935 Test: test_namespace_create ...passed 00:07:58.935 Test: test_namespace_delete ...passed 00:07:58.935 Test: test_doorbell_buffer_config ...passed 00:07:58.935 Test: test_format_nvme ...passed 00:07:58.935 Test: test_fw_commit ...passed 00:07:58.935 Test: test_fw_image_download ...passed 00:07:58.935 Test: test_sanitize ...passed 00:07:58.935 Test: test_directive ...passed 00:07:58.935 Test: test_nvme_request_add_abort ...passed 00:07:58.935 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:58.935 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:58.935 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:58.935 00:07:58.935 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.935 suites 1 1 n/a 0 0 00:07:58.935 tests 24 24 24 0 0 00:07:58.935 asserts 198 198 198 0 n/a 00:07:58.935 00:07:58.935 Elapsed time = 0.001 seconds 00:07:58.935 00:21:52 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:58.935 00:07:58.935 00:07:58.935 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.935 http://cunit.sourceforge.net/ 00:07:58.935 00:07:58.935 00:07:58.935 Suite: nvme_ctrlr_cmd 00:07:58.935 Test: test_geometry_cmd ...passed 00:07:58.935 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:58.935 00:07:58.935 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.935 suites 1 1 n/a 0 0 00:07:58.935 tests 2 2 2 0 0 00:07:58.935 asserts 7 7 7 0 n/a 00:07:58.935 00:07:58.935 Elapsed time = 0.000 seconds 00:07:58.935 00:21:52 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:58.935 00:07:58.935 00:07:58.935 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.935 http://cunit.sourceforge.net/ 00:07:58.935 00:07:58.935 00:07:58.935 Suite: nvme 00:07:58.935 Test: test_nvme_ns_construct ...passed 00:07:58.935 Test: test_nvme_ns_uuid ...passed 00:07:58.935 Test: test_nvme_ns_csi ...passed 00:07:58.935 Test: test_nvme_ns_data ...passed 00:07:58.935 Test: test_nvme_ns_set_identify_data ...passed 00:07:58.935 Test: test_spdk_nvme_ns_get_values ...passed 00:07:58.935 Test: test_spdk_nvme_ns_is_active ...passed 00:07:58.935 Test: spdk_nvme_ns_supports ...passed 00:07:58.935 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:58.935 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:58.935 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:58.935 Test: test_nvme_ns_find_id_desc ...passed 00:07:58.935 00:07:58.935 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.935 suites 1 1 n/a 0 0 00:07:58.935 tests 
12 12 12 0 0 00:07:58.935 asserts 83 83 83 0 n/a 00:07:58.935 00:07:58.935 Elapsed time = 0.001 seconds 00:07:58.935 00:21:52 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:58.935 00:07:58.935 00:07:58.935 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.935 http://cunit.sourceforge.net/ 00:07:58.935 00:07:58.935 00:07:58.935 Suite: nvme_ns_cmd 00:07:58.935 Test: split_test ...passed 00:07:58.935 Test: split_test2 ...passed 00:07:58.935 Test: split_test3 ...passed 00:07:58.935 Test: split_test4 ...passed 00:07:58.935 Test: test_nvme_ns_cmd_flush ...passed 00:07:58.935 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:58.935 Test: test_nvme_ns_cmd_copy ...passed 00:07:58.935 Test: test_io_flags ...[2024-04-24 00:21:52.709293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:58.935 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:58.935 Test: test_nvme_ns_cmd_reservation_register ...passed 00:07:58.935 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:58.935 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:58.935 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:58.935 Test: test_cmd_child_request ...passed 00:07:58.935 Test: test_nvme_ns_cmd_readv ...passed 00:07:58.935 Test: test_nvme_ns_cmd_read_with_md ...passed 00:07:58.935 Test: test_nvme_ns_cmd_writev ...[2024-04-24 00:21:52.710821] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ns_cmd_write_with_md ...passed 00:07:58.935 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:07:58.935 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:58.935 Test: test_nvme_ns_cmd_comparev ...passed 00:07:58.935 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:58.935 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:58.935 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:58.935 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:58.935 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:58.935 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-04-24 00:21:52.712846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:58.935 passed 00:07:58.935 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-04-24 00:21:52.712977] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:58.935 passed 00:07:58.935 Test: test_nvme_ns_cmd_verify ...passed 00:07:58.935 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:07:58.935 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:07:58.935 00:07:58.935 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.935 suites 1 1 n/a 0 0 00:07:58.935 tests 32 32 32 0 0 00:07:58.935 asserts 550 550 550 0 n/a 00:07:58.935 00:07:58.935 Elapsed time = 0.005 seconds 00:07:59.194 00:21:52 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:59.194 00:07:59.194 00:07:59.194 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.194 http://cunit.sourceforge.net/ 00:07:59.194 00:07:59.194 00:07:59.194 Suite: nvme_ns_cmd 00:07:59.194 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
00:07:59.194 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:59.194 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:59.194 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:59.194 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:59.194 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:59.194 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:59.194 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:59.194 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:59.194 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:59.194 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:59.194 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:59.194 00:07:59.194 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.194 suites 1 1 n/a 0 0 00:07:59.194 tests 12 12 12 0 0 00:07:59.194 asserts 123 123 123 0 n/a 00:07:59.194 00:07:59.194 Elapsed time = 0.002 seconds 00:07:59.194 00:21:52 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:07:59.194 00:07:59.194 00:07:59.194 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.194 http://cunit.sourceforge.net/ 00:07:59.194 00:07:59.194 00:07:59.194 Suite: nvme_qpair 00:07:59.194 Test: test3 ...passed 00:07:59.194 Test: test_ctrlr_failed ...passed 00:07:59.194 Test: struct_packing ...passed 00:07:59.194 Test: test_nvme_qpair_process_completions ...[2024-04-24 00:21:52.793070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:59.194 [2024-04-24 00:21:52.793603] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:59.194 [2024-04-24 00:21:52.793699] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:59.194 passed 00:07:59.194 Test: test_nvme_completion_is_retry ...[2024-04-24 00:21:52.793824] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:59.194 passed 00:07:59.194 Test: test_get_status_string ...passed 00:07:59.194 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:07:59.194 Test: test_nvme_qpair_submit_request ...passed 00:07:59.195 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:07:59.195 Test: test_nvme_qpair_manual_complete_request ...passed 00:07:59.195 Test: test_nvme_qpair_init_deinit ...[2024-04-24 00:21:52.794357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:59.195 passed 00:07:59.195 Test: test_nvme_get_sgl_print_info ...passed 00:07:59.195 00:07:59.195 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.195 suites 1 1 n/a 0 0 00:07:59.195 tests 12 12 12 0 0 00:07:59.195 asserts 154 154 154 0 n/a 00:07:59.195 00:07:59.195 Elapsed time = 0.002 seconds 00:07:59.195 00:21:52 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:07:59.195 00:07:59.195 00:07:59.195 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.195 http://cunit.sourceforge.net/ 00:07:59.195 00:07:59.195 00:07:59.195 Suite: nvme_pcie 00:07:59.195 Test: test_prp_list_append 
...[2024-04-24 00:21:52.833398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:59.195 [2024-04-24 00:21:52.834099] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:07:59.195 [2024-04-24 00:21:52.834166] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:07:59.195 [2024-04-24 00:21:52.834770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:59.195 [2024-04-24 00:21:52.834896] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:59.195 passed 00:07:59.195 Test: test_nvme_pcie_hotplug_monitor ...passed 00:07:59.195 Test: test_shadow_doorbell_update ...passed 00:07:59.195 Test: test_build_contig_hw_sgl_request ...passed 00:07:59.195 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:07:59.195 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:07:59.195 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:07:59.195 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-04-24 00:21:52.835883] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:59.195 passed 00:07:59.195 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:07:59.195 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:07:59.195 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-04-24 00:21:52.836253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:07:59.195 passed 00:07:59.195 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-04-24 00:21:52.836558] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:07:59.195 passed 00:07:59.195 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-04-24 00:21:52.836634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:07:59.195 passed 00:07:59.195 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-04-24 00:21:52.836989] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:07:59.195 passed 00:07:59.195 00:07:59.195 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.195 suites 1 1 n/a 0 0 00:07:59.195 tests 14 14 14 0 0 00:07:59.195 asserts 235 235 235 0 n/a 00:07:59.195 00:07:59.195 Elapsed time = 0.004 seconds 00:07:59.195 00:21:52 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:07:59.195 00:07:59.195 00:07:59.195 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.195 http://cunit.sourceforge.net/ 00:07:59.195 00:07:59.195 00:07:59.195 Suite: nvme_ns_cmd 00:07:59.195 Test: nvme_poll_group_create_test ...passed 00:07:59.195 Test: nvme_poll_group_add_remove_test ...passed 00:07:59.195 Test: nvme_poll_group_process_completions ...passed 00:07:59.195 Test: nvme_poll_group_destroy_test ...passed 00:07:59.195 Test: nvme_poll_group_get_free_stats ...passed 00:07:59.195 00:07:59.195 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.195 suites 1 1 n/a 0 0 00:07:59.195 tests 5 5 5 0 0 00:07:59.195 asserts 75 75 75 0 n/a 00:07:59.195 00:07:59.195 Elapsed time = 0.000 seconds 00:07:59.195 00:21:52 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:07:59.195 00:07:59.195 00:07:59.195 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.195 http://cunit.sourceforge.net/ 00:07:59.195 00:07:59.195 00:07:59.195 Suite: nvme_quirks 00:07:59.195 Test: test_nvme_quirks_striping ...passed 00:07:59.195 00:07:59.195 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.195 suites 1 1 n/a 0 0 00:07:59.195 tests 1 1 1 0 0 00:07:59.195 asserts 5 5 5 0 n/a 00:07:59.195 00:07:59.195 Elapsed time = 0.000 seconds 00:07:59.195 00:21:52 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:07:59.195 00:07:59.195 00:07:59.195 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.195 http://cunit.sourceforge.net/ 00:07:59.195 00:07:59.195 00:07:59.195 Suite: nvme_tcp 00:07:59.195 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:07:59.195 Test: test_nvme_tcp_build_iovs ...passed 00:07:59.195 Test: test_nvme_tcp_build_sgl_request ...[2024-04-24 00:21:52.940867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 824:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffcb634d340, and the iovcnt=16, remaining_size=28672 00:07:59.195 passed 00:07:59.195 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:07:59.195 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:07:59.195 Test: test_nvme_tcp_req_complete_safe ...passed 00:07:59.195 Test: test_nvme_tcp_req_get ...passed 00:07:59.195 Test: test_nvme_tcp_req_init ...passed 00:07:59.195 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:07:59.195 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:07:59.195 Test: 
test_nvme_tcp_qpair_set_recv_state ...passed 00:07:59.195 Test: test_nvme_tcp_alloc_reqs ...[2024-04-24 00:21:52.941891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634f070 is same with the state(6) to be set 00:07:59.195 passed 00:07:59.195 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:07:59.195 Test: test_nvme_tcp_pdu_ch_handle ...[2024-04-24 00:21:52.942342] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634e200 is same with the state(5) to be set 00:07:59.195 [2024-04-24 00:21:52.942450] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffcb634ed50 00:07:59.195 [2024-04-24 00:21:52.942523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1223:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:07:59.195 [2024-04-24 00:21:52.942677] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634e6c0 is same with the state(5) to be set 00:07:59.195 [2024-04-24 00:21:52.942878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1174:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:07:59.195 [2024-04-24 00:21:52.943031] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634e6c0 is same with the state(5) to be set 00:07:59.195 [2024-04-24 00:21:52.943108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:59.195 [2024-04-24 00:21:52.943170] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634e6c0 is same with the state(5) to be set 00:07:59.195 [2024-04-24 00:21:52.943254] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634e6c0 is same with the state(5) to be set 00:07:59.195 [2024-04-24 00:21:52.943328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634e6c0 is same with the state(5) to be set 00:07:59.195 [2024-04-24 00:21:52.943433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634e6c0 is same with the state(5) to be set 00:07:59.195 [2024-04-24 00:21:52.943506] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634e6c0 is same with the state(5) to be set 00:07:59.195 [2024-04-24 00:21:52.943591] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634e6c0 is same with the state(5) to be set 00:07:59.195 passed 00:07:59.195 Test: test_nvme_tcp_qpair_connect_sock ...[2024-04-24 00:21:52.943861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2321:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:07:59.195 [2024-04-24 00:21:52.943941] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:59.195 [2024-04-24 00:21:52.944286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:07:59.195 passed 00:07:59.195 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:07:59.195 Test: test_nvme_tcp_c2h_payload_handle ...[2024-04-24 00:21:52.944449] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1338:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffcb634e890): PDU Sequence Error 00:07:59.195 passed 00:07:59.195 Test: test_nvme_tcp_icresp_handle ...[2024-04-24 00:21:52.944541] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1564:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:07:59.195 [2024-04-24 00:21:52.944618] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1571:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:07:59.195 [2024-04-24 00:21:52.944692] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634e210 is same with the state(5) to be set 00:07:59.195 [2024-04-24 00:21:52.944764] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1580:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:07:59.195 [2024-04-24 00:21:52.944841] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634e210 is same with the state(5) to be set 00:07:59.195 [2024-04-24 00:21:52.944925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634e210 is same with the state(0) to be set 00:07:59.195 passed 00:07:59.195 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:07:59.195 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-04-24 00:21:52.945030] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1338:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffcb634ed50): PDU Sequence Error 00:07:59.195 [2024-04-24 00:21:52.945160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1641:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffcb634d4e0 00:07:59.195 passed 00:07:59.195 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:07:59.195 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-04-24 00:21:52.945462] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffcb634cb60, errno=0, rc=0 00:07:59.195 [2024-04-24 00:21:52.945549] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634cb60 is same with the state(5) to be set 00:07:59.195 [2024-04-24 00:21:52.945678] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcb634cb60 is same with the state(5) to be set 00:07:59.195 [2024-04-24 00:21:52.945773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffcb634cb60 (0): Success 00:07:59.195 [2024-04-24 00:21:52.945856] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffcb634cb60 (0): Success 00:07:59.195 passed 00:07:59.453 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-04-24 00:21:53.091204] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:07:59.453 passed 00:07:59.453 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:07:59.453 Test: test_nvme_tcp_poll_group_get_stats ...[2024-04-24 00:21:53.091355] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:59.453 [2024-04-24 00:21:53.091618] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2952:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:59.453 passed 00:07:59.453 Test: test_nvme_tcp_ctrlr_construct ...[2024-04-24 00:21:53.091692] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2952:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:59.453 [2024-04-24 00:21:53.091976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:59.453 [2024-04-24 00:21:53.092041] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:59.453 [2024-04-24 00:21:53.092201] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2321:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:07:59.453 [2024-04-24 00:21:53.092294] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:59.453 [2024-04-24 00:21:53.092491] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000000c40 with addr=192.168.1.78, port=23 00:07:59.453 passed 00:07:59.453 Test: test_nvme_tcp_qpair_submit_request ...[2024-04-24 00:21:53.092602] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:59.453 [2024-04-24 00:21:53.092830] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 824:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000000c80, and the iovcnt=1, remaining_size=1024 00:07:59.453 [2024-04-24 00:21:53.092925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1017:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:07:59.453 passed 00:07:59.453 00:07:59.453 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.453 suites 1 1 n/a 0 0 00:07:59.453 tests 27 27 27 0 0 00:07:59.453 asserts 624 624 624 0 n/a 00:07:59.453 00:07:59.453 Elapsed time = 0.152 seconds 00:07:59.453 00:21:53 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:07:59.453 00:07:59.453 00:07:59.453 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.453 http://cunit.sourceforge.net/ 00:07:59.453 00:07:59.453 00:07:59.453 Suite: nvme_transport 00:07:59.453 Test: test_nvme_get_transport ...passed 00:07:59.453 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:07:59.453 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:07:59.453 Test: test_nvme_transport_poll_group_add_remove ...passed 00:07:59.453 Test: test_ctrlr_get_memory_domains ...passed 00:07:59.453 00:07:59.453 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.453 suites 1 1 n/a 0 0 00:07:59.453 tests 5 5 5 0 0 00:07:59.453 asserts 28 28 28 0 n/a 00:07:59.453 00:07:59.453 Elapsed time = 0.000 seconds 00:07:59.453 00:21:53 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:07:59.453 00:07:59.453 00:07:59.453 CUnit - A unit testing framework for 
C - Version 2.1-3 00:07:59.453 http://cunit.sourceforge.net/ 00:07:59.453 00:07:59.453 00:07:59.453 Suite: nvme_io_msg 00:07:59.453 Test: test_nvme_io_msg_send ...passed 00:07:59.453 Test: test_nvme_io_msg_process ...passed 00:07:59.453 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:07:59.453 00:07:59.453 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.453 suites 1 1 n/a 0 0 00:07:59.453 tests 3 3 3 0 0 00:07:59.453 asserts 56 56 56 0 n/a 00:07:59.453 00:07:59.453 Elapsed time = 0.000 seconds 00:07:59.453 00:21:53 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:07:59.453 00:07:59.453 00:07:59.453 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.453 http://cunit.sourceforge.net/ 00:07:59.453 00:07:59.453 00:07:59.453 Suite: nvme_pcie_common 00:07:59.453 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-04-24 00:21:53.212702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:07:59.453 passed 00:07:59.453 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:07:59.453 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:07:59.454 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-04-24 00:21:53.213327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:07:59.454 [2024-04-24 00:21:53.213435] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:07:59.454 passed 00:07:59.454 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-04-24 00:21:53.213487] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:07:59.454 passed 00:07:59.454 Test: test_nvme_pcie_poll_group_get_stats ...[2024-04-24 00:21:53.213878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:59.454 [2024-04-24 00:21:53.213923] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:59.454 passed 00:07:59.454 00:07:59.454 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.454 suites 1 1 n/a 0 0 00:07:59.454 tests 6 6 6 0 0 00:07:59.454 asserts 148 148 148 0 n/a 00:07:59.454 00:07:59.454 Elapsed time = 0.001 seconds 00:07:59.454 00:21:53 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:07:59.712 00:07:59.712 00:07:59.712 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.712 http://cunit.sourceforge.net/ 00:07:59.712 00:07:59.712 00:07:59.712 Suite: nvme_fabric 00:07:59.712 Test: test_nvme_fabric_prop_set_cmd ...passed 00:07:59.712 Test: test_nvme_fabric_prop_get_cmd ...passed 00:07:59.712 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:07:59.713 Test: test_nvme_fabric_discover_probe ...passed 00:07:59.713 Test: test_nvme_fabric_qpair_connect ...[2024-04-24 00:21:53.244991] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:07:59.713 passed 00:07:59.713 00:07:59.713 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.713 suites 
1 1 n/a 0 0 00:07:59.713 tests 5 5 5 0 0 00:07:59.713 asserts 60 60 60 0 n/a 00:07:59.713 00:07:59.713 Elapsed time = 0.001 seconds 00:07:59.713 00:21:53 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:07:59.713 00:07:59.713 00:07:59.713 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.713 http://cunit.sourceforge.net/ 00:07:59.713 00:07:59.713 00:07:59.713 Suite: nvme_opal 00:07:59.713 Test: test_opal_nvme_security_recv_send_done ...passed 00:07:59.713 Test: test_opal_add_short_atom_header ...passed 00:07:59.713 00:07:59.713 [2024-04-24 00:21:53.279405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:07:59.713 Run Summary: Type Total Ran Passed Failed Inactive 00:07:59.713 suites 1 1 n/a 0 0 00:07:59.713 tests 2 2 2 0 0 00:07:59.713 asserts 22 22 22 0 n/a 00:07:59.713 00:07:59.713 Elapsed time = 0.000 seconds 00:07:59.713 00:07:59.713 real 0m1.397s 00:07:59.713 user 0m0.699s 00:07:59.713 sys 0m0.558s 00:07:59.713 00:21:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:59.713 00:21:53 -- common/autotest_common.sh@10 -- # set +x 00:07:59.713 ************************************ 00:07:59.713 END TEST unittest_nvme 00:07:59.713 ************************************ 00:07:59.713 00:21:53 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:59.713 00:21:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:59.713 00:21:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.713 00:21:53 -- common/autotest_common.sh@10 -- # set +x 00:07:59.713 ************************************ 00:07:59.713 START TEST unittest_log 00:07:59.713 ************************************ 00:07:59.713 00:21:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:59.713 00:07:59.713 00:07:59.713 CUnit - A unit testing framework for C - Version 2.1-3 00:07:59.713 http://cunit.sourceforge.net/ 00:07:59.713 00:07:59.713 00:07:59.713 Suite: log 00:07:59.713 Test: log_test ...[2024-04-24 00:21:53.405815] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:07:59.713 [2024-04-24 00:21:53.406367] log_ut.c: 57:log_test: *DEBUG*: log test 00:07:59.713 log dump test: 00:07:59.713 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:07:59.713 spdk dump test: 00:07:59.713 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:07:59.713 spdk dump test: 00:07:59.713 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:07:59.713 00000010 65 20 63 68 61 72 73 e chars 00:07:59.713 passed 00:08:00.646 Test: deprecation ...passed 00:08:00.646 00:08:00.646 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.646 suites 1 1 n/a 0 0 00:08:00.646 tests 2 2 2 0 0 00:08:00.646 asserts 73 73 73 0 n/a 00:08:00.646 00:08:00.646 Elapsed time = 0.001 seconds 00:08:00.646 00:08:00.646 real 0m1.037s 00:08:00.646 user 0m0.027s 00:08:00.646 sys 0m0.011s 00:08:00.646 00:21:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:00.646 ************************************ 00:08:00.646 00:21:54 -- common/autotest_common.sh@10 -- # set +x 00:08:00.646 END TEST unittest_log 00:08:00.646 ************************************ 00:08:00.903 00:21:54 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:00.903 00:21:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 
']' 00:08:00.903 00:21:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.903 00:21:54 -- common/autotest_common.sh@10 -- # set +x 00:08:00.903 ************************************ 00:08:00.903 START TEST unittest_lvol 00:08:00.903 ************************************ 00:08:00.903 00:21:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:00.903 00:08:00.903 00:08:00.903 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.903 http://cunit.sourceforge.net/ 00:08:00.903 00:08:00.903 00:08:00.903 Suite: lvol 00:08:00.903 Test: lvs_init_unload_success ...[2024-04-24 00:21:54.544391] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:00.903 passed 00:08:00.903 Test: lvs_init_destroy_success ...[2024-04-24 00:21:54.545037] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:00.903 passed 00:08:00.903 Test: lvs_init_opts_success ...passed 00:08:00.903 Test: lvs_unload_lvs_is_null_fail ...[2024-04-24 00:21:54.545329] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:00.903 passed 00:08:00.903 Test: lvs_names ...[2024-04-24 00:21:54.545397] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:00.903 [2024-04-24 00:21:54.545460] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:08:00.903 [2024-04-24 00:21:54.545669] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:00.903 passed 00:08:00.903 Test: lvol_create_destroy_success ...passed 00:08:00.904 Test: lvol_create_fail ...[2024-04-24 00:21:54.546336] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:00.904 passed 00:08:00.904 Test: lvol_destroy_fail ...[2024-04-24 00:21:54.546481] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:00.904 [2024-04-24 00:21:54.546837] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:00.904 passed 00:08:00.904 Test: lvol_close ...[2024-04-24 00:21:54.547132] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:00.904 passed 00:08:00.904 Test: lvol_resize ...[2024-04-24 00:21:54.547194] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:00.904 passed 00:08:00.904 Test: lvol_set_read_only ...passed 00:08:00.904 Test: test_lvs_load ...[2024-04-24 00:21:54.548053] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:00.904 [2024-04-24 00:21:54.548116] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:00.904 passed 00:08:00.904 Test: lvols_load ...[2024-04-24 00:21:54.548387] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:00.904 [2024-04-24 00:21:54.548549] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:00.904 passed 00:08:00.904 Test: lvol_open ...passed 00:08:00.904 Test: lvol_snapshot ...passed 00:08:00.904 Test: lvol_snapshot_fail ...[2024-04-24 
00:21:54.549358] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:08:00.904 passed 00:08:00.904 Test: lvol_clone ...passed 00:08:00.904 Test: lvol_clone_fail ...[2024-04-24 00:21:54.550053] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:00.904 passed 00:08:00.904 Test: lvol_iter_clones ...passed 00:08:00.904 Test: lvol_refcnt ...[2024-04-24 00:21:54.550714] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 8dfa42ec-99a4-4b3c-99ee-ef142e9f14a0 because it is still open 00:08:00.904 passed 00:08:00.904 Test: lvol_names ...[2024-04-24 00:21:54.550983] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:00.904 [2024-04-24 00:21:54.551107] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:00.904 [2024-04-24 00:21:54.551412] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:00.904 passed 00:08:00.904 Test: lvol_create_thin_provisioned ...passed 00:08:00.904 Test: lvol_rename ...[2024-04-24 00:21:54.551923] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:00.904 [2024-04-24 00:21:54.552053] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:00.904 passed 00:08:00.904 Test: lvs_rename ...[2024-04-24 00:21:54.552377] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:00.904 passed 00:08:00.904 Test: lvol_inflate ...[2024-04-24 00:21:54.552637] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:00.904 passed 00:08:00.904 Test: lvol_decouple_parent ...[2024-04-24 00:21:54.552944] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:00.904 passed 00:08:00.904 Test: lvol_get_xattr ...passed 00:08:00.904 Test: lvol_esnap_reload ...passed 00:08:00.904 Test: lvol_esnap_create_bad_args ...[2024-04-24 00:21:54.553512] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:00.904 [2024-04-24 00:21:54.553583] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:08:00.904 [2024-04-24 00:21:54.553669] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:00.904 [2024-04-24 00:21:54.553844] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:00.904 [2024-04-24 00:21:54.553982] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:00.904 passed 00:08:00.904 Test: lvol_esnap_create_delete ...passed 00:08:00.904 Test: lvol_esnap_load_esnaps ...[2024-04-24 00:21:54.554295] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:00.904 passed 00:08:00.904 Test: lvol_esnap_missing ...[2024-04-24 00:21:54.554469] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:00.904 [2024-04-24 00:21:54.554541] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:00.904 passed 00:08:00.904 Test: lvol_esnap_hotplug ... 00:08:00.904 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:00.904 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:00.904 [2024-04-24 00:21:54.555367] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d4c86247-46d6-44dc-9389-7a9833b66b76: failed to create esnap bs_dev: error -12 00:08:00.904 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:08:00.904 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:00.904 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:00.904 [2024-04-24 00:21:54.555612] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 712a0e29-0fdc-458a-8de0-732b364fe2b8: failed to create esnap bs_dev: error -12 00:08:00.904 [2024-04-24 00:21:54.555747] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol a6e1e2b9-399a-4f41-99a7-3232962ef31d: failed to create esnap bs_dev: error -12 00:08:00.904 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:00.904 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:00.904 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:00.904 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:00.904 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:00.904 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:00.904 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:00.904 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:00.904 passed 00:08:00.904 Test: lvol_get_by ...passed 00:08:00.904 00:08:00.904 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.904 suites 1 1 n/a 0 0 00:08:00.904 tests 34 34 34 0 0 00:08:00.904 asserts 1439 1439 1439 0 n/a 00:08:00.904 00:08:00.904 Elapsed time = 0.013 seconds 00:08:00.904 00:08:00.904 real 0m0.052s 00:08:00.904 user 0m0.021s 00:08:00.904 sys 0m0.032s 00:08:00.904 00:21:54 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:08:00.904 00:21:54 -- common/autotest_common.sh@10 -- # set +x 00:08:00.904 ************************************ 00:08:00.904 END TEST unittest_lvol 00:08:00.904 ************************************ 00:08:00.904 00:21:54 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:00.904 00:21:54 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:00.904 00:21:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:00.904 00:21:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.904 00:21:54 -- common/autotest_common.sh@10 -- # set +x 00:08:00.904 ************************************ 00:08:00.904 START TEST unittest_nvme_rdma 00:08:00.904 ************************************ 00:08:00.904 00:21:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:00.904 00:08:00.904 00:08:00.904 CUnit - A unit testing framework for C - Version 2.1-3 00:08:00.904 http://cunit.sourceforge.net/ 00:08:00.904 00:08:00.904 00:08:00.904 Suite: nvme_rdma 00:08:00.904 Test: test_nvme_rdma_build_sgl_request ...[2024-04-24 00:21:54.689763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:00.904 [2024-04-24 00:21:54.690104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1632:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:00.904 passed 00:08:00.904 Test: test_nvme_rdma_build_sgl_inline_request ...[2024-04-24 00:21:54.690203] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1688:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:00.904 passed 00:08:00.904 Test: test_nvme_rdma_build_contig_request ...[2024-04-24 00:21:54.690302] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1569:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:00.904 passed 00:08:00.904 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:08:00.904 Test: test_nvme_rdma_create_reqs ...[2024-04-24 00:21:54.690444] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:00.904 passed 00:08:00.904 Test: test_nvme_rdma_create_rsps ...[2024-04-24 00:21:54.690818] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:00.904 passed 00:08:00.904 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-04-24 00:21:54.691047] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:00.904 [2024-04-24 00:21:54.691109] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:00.904 passed 00:08:00.904 Test: test_nvme_rdma_poller_create ...passed 00:08:00.904 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:08:00.904 Test: test_nvme_rdma_ctrlr_construct ...[2024-04-24 00:21:54.691327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:00.904 passed 00:08:00.904 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:00.904 Test: test_nvme_rdma_req_init ...passed 00:08:00.904 Test: test_nvme_rdma_validate_cm_event ...[2024-04-24 00:21:54.691597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:00.904 passed 00:08:00.904 Test: test_nvme_rdma_qpair_init ...[2024-04-24 00:21:54.691646] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:00.904 passed 00:08:00.905 Test: test_nvme_rdma_qpair_submit_request ...passed 00:08:00.905 Test: test_nvme_rdma_memory_domain ...[2024-04-24 00:21:54.691888] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:08:01.162 passed 00:08:01.162 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:01.162 Test: test_rdma_get_memory_translation ...passed 00:08:01.162 Test: test_get_rdma_qpair_from_wc ...passed 00:08:01.162 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:08:01.162 Test: test_nvme_rdma_poll_group_get_stats ...[2024-04-24 00:21:54.691990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:01.162 [2024-04-24 00:21:54.692053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:01.162 [2024-04-24 00:21:54.692147] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:01.162 [2024-04-24 00:21:54.692192] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:01.162 passed 00:08:01.162 Test: test_nvme_rdma_qpair_set_poller ...[2024-04-24 00:21:54.692383] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:01.162 [2024-04-24 00:21:54.692456] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:01.162 [2024-04-24 00:21:54.692497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe20d4b570 on poll group 0x60c000000040 00:08:01.162 [2024-04-24 00:21:54.692574] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:08:01.163 [2024-04-24 00:21:54.692613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:01.163 [2024-04-24 00:21:54.692651] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe20d4b570 on poll group 0x60c000000040 00:08:01.163 [2024-04-24 00:21:54.692730] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:01.163 passed 00:08:01.163 00:08:01.163 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.163 suites 1 1 n/a 0 0 00:08:01.163 tests 22 22 22 0 0 00:08:01.163 asserts 412 412 412 0 n/a 00:08:01.163 00:08:01.163 Elapsed time = 0.003 seconds 00:08:01.163 00:08:01.163 real 0m0.038s 00:08:01.163 user 0m0.017s 00:08:01.163 sys 0m0.021s 00:08:01.163 00:21:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:01.163 00:21:54 -- common/autotest_common.sh@10 -- # set +x 00:08:01.163 ************************************ 00:08:01.163 END TEST unittest_nvme_rdma 00:08:01.163 ************************************ 00:08:01.163 00:21:54 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:01.163 00:21:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.163 00:21:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.163 00:21:54 -- common/autotest_common.sh@10 -- # set +x 00:08:01.163 ************************************ 00:08:01.163 START TEST unittest_nvmf_transport 00:08:01.163 ************************************ 00:08:01.163 00:21:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:01.163 00:08:01.163 00:08:01.163 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.163 http://cunit.sourceforge.net/ 00:08:01.163 00:08:01.163 00:08:01.163 Suite: nvmf 00:08:01.163 Test: test_spdk_nvmf_transport_create ...[2024-04-24 00:21:54.822048] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 249:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:01.163 [2024-04-24 00:21:54.822492] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 269:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:01.163 [2024-04-24 00:21:54.822593] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 273:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:01.163 [2024-04-24 00:21:54.822763] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 256:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:01.163 passed 00:08:01.163 Test: test_nvmf_transport_poll_group_create ...passed 00:08:01.163 Test: test_spdk_nvmf_transport_opts_init ...[2024-04-24 00:21:54.823123] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 790:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:08:01.163 [2024-04-24 00:21:54.823255] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 795:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:01.163 [2024-04-24 00:21:54.823316] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 800:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:01.163 passed 00:08:01.163 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:08:01.163 00:08:01.163 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.163 suites 1 1 n/a 0 0 00:08:01.163 tests 4 4 4 0 0 00:08:01.163 asserts 49 49 49 0 n/a 00:08:01.163 00:08:01.163 Elapsed time = 0.002 seconds 00:08:01.163 00:08:01.163 real 0m0.048s 00:08:01.163 user 0m0.024s 00:08:01.163 sys 0m0.023s 00:08:01.163 00:21:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:01.163 00:21:54 -- common/autotest_common.sh@10 -- # set +x 00:08:01.163 ************************************ 00:08:01.163 END TEST unittest_nvmf_transport 00:08:01.163 ************************************ 00:08:01.163 00:21:54 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:01.163 00:21:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.163 00:21:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.163 00:21:54 -- common/autotest_common.sh@10 -- # set +x 00:08:01.163 ************************************ 00:08:01.163 START TEST unittest_rdma 00:08:01.163 ************************************ 00:08:01.163 00:21:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:01.421 00:08:01.421 00:08:01.421 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.421 http://cunit.sourceforge.net/ 00:08:01.421 00:08:01.421 00:08:01.421 Suite: rdma_common 00:08:01.421 Test: test_spdk_rdma_pd ...[2024-04-24 00:21:54.952378] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:01.421 [2024-04-24 00:21:54.952830] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:01.421 passed 00:08:01.421 00:08:01.421 Run Summary: Type Total Ran Passed Failed Inactive 00:08:01.421 suites 1 1 n/a 0 0 00:08:01.421 tests 1 1 1 0 0 00:08:01.421 asserts 31 31 31 0 n/a 00:08:01.421 00:08:01.421 Elapsed time = 0.001 seconds 00:08:01.421 00:08:01.421 real 0m0.034s 00:08:01.421 user 0m0.018s 00:08:01.421 sys 0m0.017s 00:08:01.421 00:21:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:01.421 00:21:54 -- common/autotest_common.sh@10 -- # set +x 00:08:01.421 ************************************ 00:08:01.421 END TEST unittest_rdma 00:08:01.421 ************************************ 00:08:01.421 00:21:55 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:01.421 00:21:55 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:01.421 00:21:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:01.421 00:21:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.421 00:21:55 -- common/autotest_common.sh@10 -- # set +x 00:08:01.421 ************************************ 00:08:01.421 START TEST unittest_nvme_cuse 00:08:01.421 ************************************ 00:08:01.421 00:21:55 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:01.421 00:08:01.421 00:08:01.421 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.421 http://cunit.sourceforge.net/ 00:08:01.421 00:08:01.421 00:08:01.421 Suite: nvme_cuse 00:08:01.421 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:01.421 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:01.421 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:01.421 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:01.421 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:01.421 Test: test_cuse_nvme_submit_io ...[2024-04-24 00:21:55.082480] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:01.421 passed 00:08:01.422 Test: test_cuse_nvme_reset ...[2024-04-24 00:21:55.082874] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:01.422 passed 00:08:02.356 Test: test_nvme_cuse_stop ...passed 00:08:02.356 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:02.356 00:08:02.356 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.356 suites 1 1 n/a 0 0 00:08:02.356 tests 9 9 9 0 0 00:08:02.356 asserts 118 118 118 0 n/a 00:08:02.356 00:08:02.356 Elapsed time = 1.005 seconds 00:08:02.356 00:08:02.356 real 0m1.039s 00:08:02.356 user 0m0.496s 00:08:02.356 sys 0m0.543s 00:08:02.356 00:21:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:02.356 ************************************ 00:08:02.356 END TEST unittest_nvme_cuse 00:08:02.356 ************************************ 00:08:02.356 00:21:56 -- common/autotest_common.sh@10 -- # set +x 00:08:02.615 00:21:56 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:08:02.615 00:21:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:02.615 00:21:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.615 00:21:56 -- common/autotest_common.sh@10 -- # set +x 00:08:02.615 ************************************ 00:08:02.615 START TEST unittest_nvmf 00:08:02.615 ************************************ 00:08:02.615 00:21:56 -- common/autotest_common.sh@1111 -- # unittest_nvmf 00:08:02.615 00:21:56 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:02.615 00:08:02.615 00:08:02.615 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.615 http://cunit.sourceforge.net/ 00:08:02.615 00:08:02.615 00:08:02.615 Suite: nvmf 00:08:02.615 Test: test_get_log_page ...[2024-04-24 00:21:56.234362] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2562:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:02.615 passed 00:08:02.615 Test: test_process_fabrics_cmd ...passed 00:08:02.615 Test: test_connect ...[2024-04-24 00:21:56.235403] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 956:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:08:02.615 [2024-04-24 00:21:56.235540] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:02.615 [2024-04-24 00:21:56.235596] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 995:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:02.615 [2024-04-24 00:21:56.235642] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:08:02.615 [2024-04-24 00:21:56.235750] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 830:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:02.615 [2024-04-24 00:21:56.235795] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 837:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:02.615 [2024-04-24 00:21:56.235916] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 843:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:02.615 [2024-04-24 00:21:56.235962] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 870:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:02.615 [2024-04-24 00:21:56.236065] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:02.615 [2024-04-24 00:21:56.236142] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:02.615 [2024-04-24 00:21:56.236474] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 629:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:02.615 [2024-04-24 00:21:56.236574] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 635:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:02.615 [2024-04-24 00:21:56.236682] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 642:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:02.615 [2024-04-24 00:21:56.236759] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 665:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:02.615 [2024-04-24 00:21:56.236888] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 242:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:08:02.615 [2024-04-24 00:21:56.237048] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 750:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:08:02.615 [2024-04-24 00:21:56.237125] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 750:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:08:02.615 passed 00:08:02.615 Test: test_get_ns_id_desc_list ...passed 00:08:02.615 Test: test_identify_ns ...[2024-04-24 00:21:56.237422] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:02.615 [2024-04-24 00:21:56.237781] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:02.615 [2024-04-24 00:21:56.237933] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:08:02.615 passed 00:08:02.615 Test: test_identify_ns_iocs_specific ...[2024-04-24 00:21:56.238100] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:02.615 [2024-04-24 00:21:56.238416] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:02.615 passed 00:08:02.615 Test: test_reservation_write_exclusive ...passed 00:08:02.615 Test: test_reservation_exclusive_access ...passed 00:08:02.615 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:08:02.615 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:02.615 Test: test_reservation_notification_log_page ...passed 00:08:02.615 
Test: test_get_dif_ctx ...passed 00:08:02.615 Test: test_set_get_features ...[2024-04-24 00:21:56.239115] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1592:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:02.615 [2024-04-24 00:21:56.239199] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1592:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:02.615 [2024-04-24 00:21:56.239266] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1603:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:02.615 [2024-04-24 00:21:56.239321] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1679:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:02.615 passed 00:08:02.615 Test: test_identify_ctrlr ...passed 00:08:02.615 Test: test_identify_ctrlr_iocs_specific ...passed 00:08:02.615 Test: test_custom_admin_cmd ...passed 00:08:02.615 Test: test_fused_compare_and_write ...[2024-04-24 00:21:56.239829] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4163:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:02.615 [2024-04-24 00:21:56.239894] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4152:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:02.615 [2024-04-24 00:21:56.239941] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4170:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:02.615 passed 00:08:02.615 Test: test_multi_async_event_reqs ...passed 00:08:02.615 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:02.615 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:02.615 Test: test_multi_async_events ...passed 00:08:02.615 Test: test_rae ...passed 00:08:02.615 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:02.615 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:02.615 Test: test_spdk_nvmf_request_zcopy_start ...[2024-04-24 00:21:56.240528] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4290:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:08:02.615 passed 00:08:02.616 Test: test_zcopy_read ...passed 00:08:02.616 Test: test_zcopy_write ...passed 00:08:02.616 Test: test_nvmf_property_set ...passed 00:08:02.616 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-04-24 00:21:56.240754] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1890:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:02.616 [2024-04-24 00:21:56.240811] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1890:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:02.616 passed 00:08:02.616 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-04-24 00:21:56.240872] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1913:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:02.616 [2024-04-24 00:21:56.240914] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1919:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:02.616 [2024-04-24 00:21:56.240964] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1931:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:02.616 passed 00:08:02.616 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:08:02.616 00:08:02.616 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.616 suites 1 1 n/a 0 0 00:08:02.616 tests 31 31 31 0 0 00:08:02.616 asserts 951 951 951 0 n/a 
00:08:02.616 00:08:02.616 Elapsed time = 0.007 seconds 00:08:02.616 00:21:56 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:02.616 00:08:02.616 00:08:02.616 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.616 http://cunit.sourceforge.net/ 00:08:02.616 00:08:02.616 00:08:02.616 Suite: nvmf 00:08:02.616 Test: test_get_rw_params ...passed 00:08:02.616 Test: test_lba_in_range ...passed 00:08:02.616 Test: test_get_dif_ctx ...passed 00:08:02.616 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:02.616 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-04-24 00:21:56.287763] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:02.616 [2024-04-24 00:21:56.288121] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:02.616 passed 00:08:02.616 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-04-24 00:21:56.288267] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:02.616 [2024-04-24 00:21:56.288344] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:02.616 [2024-04-24 00:21:56.288449] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 960:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:02.616 passed 00:08:02.616 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-04-24 00:21:56.288588] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:08:02.616 [2024-04-24 00:21:56.288633] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:02.616 passed 00:08:02.616 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:02.616 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...[2024-04-24 00:21:56.288734] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:02.616 [2024-04-24 00:21:56.288783] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:02.616 passed 00:08:02.616 00:08:02.616 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.616 suites 1 1 n/a 0 0 00:08:02.616 tests 9 9 9 0 0 00:08:02.616 asserts 157 157 157 0 n/a 00:08:02.616 00:08:02.616 Elapsed time = 0.001 seconds 00:08:02.616 00:21:56 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:02.616 00:08:02.616 00:08:02.616 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.616 http://cunit.sourceforge.net/ 00:08:02.616 00:08:02.616 00:08:02.616 Suite: nvmf 00:08:02.616 Test: test_discovery_log ...passed 00:08:02.616 Test: test_discovery_log_with_filters ...passed 00:08:02.616 00:08:02.616 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.616 suites 1 1 n/a 0 0 00:08:02.616 tests 2 2 2 0 0 00:08:02.616 asserts 238 238 238 0 n/a 00:08:02.616 00:08:02.616 Elapsed time = 0.003 seconds 00:08:02.616 00:21:56 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:02.616 00:08:02.616 00:08:02.616 CUnit - A unit testing framework for C - 
Version 2.1-3 00:08:02.616 http://cunit.sourceforge.net/ 00:08:02.616 00:08:02.616 00:08:02.616 Suite: nvmf 00:08:02.616 Test: nvmf_test_create_subsystem ...[2024-04-24 00:21:56.376499] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:02.616 [2024-04-24 00:21:56.376876] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:08:02.616 [2024-04-24 00:21:56.377140] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:02.616 [2024-04-24 00:21:56.377281] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:08:02.616 [2024-04-24 00:21:56.377342] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:08:02.616 [2024-04-24 00:21:56.377407] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:08:02.616 [2024-04-24 00:21:56.377463] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:08:02.616 [2024-04-24 00:21:56.377540] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:08:02.616 [2024-04-24 00:21:56.377594] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:02.616 [2024-04-24 00:21:56.377661] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:08:02.616 [2024-04-24 00:21:56.377713] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
00:08:02.616 [2024-04-24 00:21:56.377782] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:08:02.616 [2024-04-24 00:21:56.377915] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:02.616 [2024-04-24 00:21:56.378061] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:08:02.616 [2024-04-24 00:21:56.378211] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:08:02.616 [2024-04-24 00:21:56.378335] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:08:02.616 [2024-04-24 00:21:56.378486] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:02.616 [2024-04-24 00:21:56.378546] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:08:02.616 [2024-04-24 00:21:56.378614] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:02.616 passed 00:08:02.616 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-04-24 00:21:56.378729] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:02.616 [2024-04-24 00:21:56.378798] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:02.616 [2024-04-24 00:21:56.378849] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:02.616 [2024-04-24 00:21:56.379020] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:02.616 [2024-04-24 00:21:56.379095] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1881:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:02.616 passed 00:08:02.616 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:08:02.616 Test: test_spdk_nvmf_ns_visible ...[2024-04-24 00:21:56.379379] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:08:02.616 passed 00:08:02.616 Test: test_reservation_register ...[2024-04-24 
00:21:56.379829] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.616 [2024-04-24 00:21:56.379992] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2990:nvmf_ns_reservation_register: *ERROR*: No registrant 00:08:02.616 passed 00:08:02.616 Test: test_reservation_register_with_ptpl ...passed 00:08:02.616 Test: test_reservation_acquire_preempt_1 ...[2024-04-24 00:21:56.381111] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.616 passed 00:08:02.616 Test: test_reservation_acquire_release_with_ptpl ...passed 00:08:02.616 Test: test_reservation_release ...[2024-04-24 00:21:56.383177] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.616 passed 00:08:02.617 Test: test_reservation_unregister_notification ...[2024-04-24 00:21:56.383507] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.617 passed 00:08:02.617 Test: test_reservation_release_notification ...[2024-04-24 00:21:56.383719] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.617 passed 00:08:02.617 Test: test_reservation_release_notification_write_exclusive ...[2024-04-24 00:21:56.383920] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.617 passed 00:08:02.617 Test: test_reservation_clear_notification ...[2024-04-24 00:21:56.384148] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.617 passed 00:08:02.617 Test: test_reservation_preempt_notification ...[2024-04-24 00:21:56.384358] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2932:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:02.617 passed 00:08:02.617 Test: test_spdk_nvmf_ns_event ...passed 00:08:02.617 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:08:02.617 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:08:02.617 Test: test_spdk_nvmf_subsystem_add_host ...[2024-04-24 00:21:56.385007] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 262:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:08:02.617 [2024-04-24 00:21:56.385089] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:08:02.617 passed 00:08:02.617 Test: test_nvmf_ns_reservation_report ...[2024-04-24 00:21:56.385241] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3295:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:08:02.617 passed 00:08:02.617 Test: test_nvmf_nqn_is_valid ...[2024-04-24 00:21:56.385333] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:08:02.617 [2024-04-24 00:21:56.385379] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ed761b60-7fcb-4ce5-b330-003c46e61d4": uuid is not the 
correct length 00:08:02.617 passed 00:08:02.617 Test: test_nvmf_ns_reservation_restore ...passed 00:08:02.617 Test: test_nvmf_subsystem_state_change ...[2024-04-24 00:21:56.385416] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:08:02.617 [2024-04-24 00:21:56.385531] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2489:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:08:02.617 passed 00:08:02.617 Test: test_nvmf_reservation_custom_ops ...passed 00:08:02.617 00:08:02.617 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.617 suites 1 1 n/a 0 0 00:08:02.617 tests 23 23 23 0 0 00:08:02.617 asserts 482 482 482 0 n/a 00:08:02.617 00:08:02.617 Elapsed time = 0.010 seconds 00:08:02.875 00:21:56 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:08:02.875 00:08:02.875 00:08:02.875 CUnit - A unit testing framework for C - Version 2.1-3 00:08:02.875 http://cunit.sourceforge.net/ 00:08:02.875 00:08:02.875 00:08:02.875 Suite: nvmf 00:08:02.875 Test: test_nvmf_tcp_create ...[2024-04-24 00:21:56.456347] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 742:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:02.875 passed 00:08:02.875 Test: test_nvmf_tcp_destroy ...passed 00:08:02.875 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:02.875 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:02.875 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:02.875 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:02.875 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:02.875 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-04-24 00:21:56.579755] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.875 passed 00:08:02.875 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:08:02.875 Test: test_nvmf_tcp_icreq_handle ...[2024-04-24 00:21:56.579849] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36895d60 is same with the state(5) to be set 00:08:02.875 [2024-04-24 00:21:56.579963] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36895d60 is same with the state(5) to be set 00:08:02.875 [2024-04-24 00:21:56.580016] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.875 [2024-04-24 00:21:56.580054] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36895d60 is same with the state(5) to be set 00:08:02.875 [2024-04-24 00:21:56.580161] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2102:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:02.875 [2024-04-24 00:21:56.580260] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.875 [2024-04-24 00:21:56.580337] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36895d60 is same with the state(5) to be set 00:08:02.875 [2024-04-24 00:21:56.580389] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2102:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:02.875 
[2024-04-24 00:21:56.580440] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36895d60 is same with the state(5) to be set 00:08:02.875 [2024-04-24 00:21:56.580481] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.875 [2024-04-24 00:21:56.580529] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36895d60 is same with the state(5) to be set 00:08:02.875 [2024-04-24 00:21:56.580577] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:02.875 passed 00:08:02.875 Test: test_nvmf_tcp_check_xfer_type ...passed 00:08:02.875 Test: test_nvmf_tcp_invalid_sgl ...[2024-04-24 00:21:56.580649] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36895d60 is same with the state(5) to be set 00:08:02.876 [2024-04-24 00:21:56.580745] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2497:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:02.876 [2024-04-24 00:21:56.580799] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.876 [2024-04-24 00:21:56.580840] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36895d60 is same with the state(5) to be set 00:08:02.876 passed 00:08:02.876 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-04-24 00:21:56.580913] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2229:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffc36896ac0 00:08:02.876 [2024-04-24 00:21:56.581017] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.876 [2024-04-24 00:21:56.581081] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36896220 is same with the state(5) to be set 00:08:02.876 [2024-04-24 00:21:56.581136] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2286:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffc36896220 00:08:02.876 [2024-04-24 00:21:56.581181] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.876 [2024-04-24 00:21:56.581221] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36896220 is same with the state(5) to be set 00:08:02.876 [2024-04-24 00:21:56.581269] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2239:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:02.876 [2024-04-24 00:21:56.581318] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.876 [2024-04-24 00:21:56.581380] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36896220 is same with the state(5) to be set 00:08:02.876 [2024-04-24 00:21:56.581432] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2278:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:02.876 [2024-04-24 00:21:56.581477] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.876 [2024-04-24 00:21:56.581526] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36896220 is same with the state(5) to be set 00:08:02.876 [2024-04-24 00:21:56.581570] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.876 [2024-04-24 00:21:56.581610] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36896220 is same with the state(5) to be set 00:08:02.876 [2024-04-24 00:21:56.581680] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.876 [2024-04-24 00:21:56.581722] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36896220 is same with the state(5) to be set 00:08:02.876 [2024-04-24 00:21:56.581791] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.876 [2024-04-24 00:21:56.581839] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36896220 is same with the state(5) to be set 00:08:02.876 [2024-04-24 00:21:56.581890] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.876 [2024-04-24 00:21:56.581931] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36896220 is same with the state(5) to be set 00:08:02.876 [2024-04-24 00:21:56.582005] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.876 [2024-04-24 00:21:56.582045] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36896220 is same with the state(5) to be set 00:08:02.876 passed 00:08:02.876 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-04-24 00:21:56.582102] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:02.876 [2024-04-24 00:21:56.582135] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc36896220 is same with the state(5) to be set 00:08:02.876 passed 00:08:02.876 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-04-24 00:21:56.609981] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:08:02.876 passed 00:08:02.876 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-04-24 00:21:56.610098] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:08:02.876 passed 00:08:02.876 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-04-24 00:21:56.610529] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:02.876 [2024-04-24 00:21:56.610592] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 
00:08:02.876 passed 00:08:02.876 00:08:02.876 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.876 suites 1 1 n/a 0 0 00:08:02.876 tests 17 17 17 0 0 00:08:02.876 asserts 222 222 222 0 n/a 00:08:02.876 00:08:02.876 Elapsed time = 0.184 seconds 00:08:02.876 [2024-04-24 00:21:56.610849] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:02.876 [2024-04-24 00:21:56.610918] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:08:03.134 00:21:56 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:03.134 00:08:03.134 00:08:03.134 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.134 http://cunit.sourceforge.net/ 00:08:03.134 00:08:03.134 00:08:03.134 Suite: nvmf 00:08:03.134 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:03.134 00:08:03.134 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.134 suites 1 1 n/a 0 0 00:08:03.134 tests 1 1 1 0 0 00:08:03.134 asserts 17 17 17 0 n/a 00:08:03.134 00:08:03.134 Elapsed time = 0.022 seconds 00:08:03.134 00:08:03.134 real 0m0.589s 00:08:03.134 user 0m0.286s 00:08:03.134 sys 0m0.306s 00:08:03.134 00:21:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:03.134 ************************************ 00:08:03.134 END TEST unittest_nvmf 00:08:03.134 00:21:56 -- common/autotest_common.sh@10 -- # set +x 00:08:03.134 ************************************ 00:08:03.134 00:21:56 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:03.134 00:21:56 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:03.134 00:21:56 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:03.134 00:21:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:03.134 00:21:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.134 00:21:56 -- common/autotest_common.sh@10 -- # set +x 00:08:03.134 ************************************ 00:08:03.134 START TEST unittest_nvmf_rdma 00:08:03.134 ************************************ 00:08:03.134 00:21:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:03.393 00:08:03.393 00:08:03.393 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.393 http://cunit.sourceforge.net/ 00:08:03.393 00:08:03.393 00:08:03.393 Suite: nvmf 00:08:03.393 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-04-24 00:21:56.931305] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:08:03.393 [2024-04-24 00:21:56.931614] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:03.393 [2024-04-24 00:21:56.931660] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:03.393 passed 00:08:03.393 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:03.393 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:03.393 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:03.393 Test: test_nvmf_rdma_opts_init ...passed 
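Each *_ut binary driven by unit/unittest.sh above is a standalone CUnit 2.1-3 program, which is where the repeated "Suite:", "Test: ... passed" and "Run Summary" blocks come from; the "asserts" column counts individual CU_ASSERT evaluations, which is why it is far larger than the test count. A minimal sketch of such a program (generic CUnit usage, not code taken from the SPDK tree):

    #include <CUnit/Basic.h>

    /* A trivial test case; real suites register dozens of these. */
    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }

        suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);   /* per-test "...passed" lines */
        CU_basic_run_tests();                /* prints the Run Summary table */
        CU_cleanup_registry();
        return CU_get_error();
    }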
00:08:03.393 Test: test_nvmf_rdma_request_free_data ...passed 00:08:03.393 Test: test_nvmf_rdma_update_ibv_state ...[2024-04-24 00:21:56.933206] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:08:03.393 passed 00:08:03.393 Test: test_nvmf_rdma_resources_create ...[2024-04-24 00:21:56.933277] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:08:03.393 passed 00:08:03.393 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:03.393 Test: test_nvmf_rdma_resize_cq ...[2024-04-24 00:21:56.934795] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:08:03.393 Using CQ of insufficient size may lead to CQ overrun 00:08:03.393 [2024-04-24 00:21:56.934911] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:03.393 passed 00:08:03.393 00:08:03.393 [2024-04-24 00:21:56.934985] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:03.393 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.393 suites 1 1 n/a 0 0 00:08:03.393 tests 10 10 10 0 0 00:08:03.393 asserts 584 584 584 0 n/a 00:08:03.393 00:08:03.393 Elapsed time = 0.004 seconds 00:08:03.393 00:08:03.393 real 0m0.055s 00:08:03.393 user 0m0.020s 00:08:03.393 sys 0m0.034s 00:08:03.393 00:21:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:03.393 00:21:56 -- common/autotest_common.sh@10 -- # set +x 00:08:03.393 ************************************ 00:08:03.393 END TEST unittest_nvmf_rdma 00:08:03.393 ************************************ 00:08:03.393 00:21:57 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:03.393 00:21:57 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:08:03.393 00:21:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:03.393 00:21:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.393 00:21:57 -- common/autotest_common.sh@10 -- # set +x 00:08:03.393 ************************************ 00:08:03.393 START TEST unittest_scsi 00:08:03.393 ************************************ 00:08:03.393 00:21:57 -- common/autotest_common.sh@1111 -- # unittest_scsi 00:08:03.393 00:21:57 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:03.393 00:08:03.393 00:08:03.393 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.393 http://cunit.sourceforge.net/ 00:08:03.393 00:08:03.393 00:08:03.393 Suite: dev_suite 00:08:03.393 Test: dev_destruct_null_dev ...passed 00:08:03.393 Test: dev_destruct_zero_luns ...passed 00:08:03.393 Test: dev_destruct_null_lun ...passed 00:08:03.393 Test: dev_destruct_success ...passed 00:08:03.393 Test: dev_construct_num_luns_zero ...[2024-04-24 00:21:57.082193] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:08:03.393 passed 00:08:03.393 Test: dev_construct_no_lun_zero ...passed 00:08:03.393 Test: dev_construct_null_lun ...[2024-04-24 00:21:57.082562] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:03.393 passed 00:08:03.393 
Test: dev_construct_name_too_long ...passed 00:08:03.393 Test: dev_construct_success ...passed 00:08:03.393 Test: dev_construct_success_lun_zero_not_first ...passed 00:08:03.393 Test: dev_queue_mgmt_task_success ...passed 00:08:03.393 Test: dev_queue_task_success ...passed 00:08:03.393 Test: dev_stop_success ...passed 00:08:03.393 Test: dev_add_port_max_ports ...[2024-04-24 00:21:57.082622] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:03.393 [2024-04-24 00:21:57.082704] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:08:03.393 [2024-04-24 00:21:57.083115] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:03.393 passed 00:08:03.393 Test: dev_add_port_construct_failure1 ...[2024-04-24 00:21:57.083232] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:03.393 passed 00:08:03.393 Test: dev_add_port_construct_failure2 ...[2024-04-24 00:21:57.083333] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:03.393 passed 00:08:03.393 Test: dev_add_port_success1 ...passed 00:08:03.393 Test: dev_add_port_success2 ...passed 00:08:03.393 Test: dev_add_port_success3 ...passed 00:08:03.393 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:03.393 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:03.393 Test: dev_find_port_by_id_success ...passed 00:08:03.393 Test: dev_add_lun_bdev_not_found ...passed 00:08:03.393 Test: dev_add_lun_no_free_lun_id ...[2024-04-24 00:21:57.083798] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:03.393 passed 00:08:03.393 Test: dev_add_lun_success1 ...passed 00:08:03.393 Test: dev_add_lun_success2 ...passed 00:08:03.393 Test: dev_check_pending_tasks ...passed 00:08:03.393 Test: dev_iterate_luns ...passed 00:08:03.393 Test: dev_find_free_lun ...passed 00:08:03.393 00:08:03.393 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.393 suites 1 1 n/a 0 0 00:08:03.393 tests 29 29 29 0 0 00:08:03.393 asserts 97 97 97 0 n/a 00:08:03.393 00:08:03.393 Elapsed time = 0.002 seconds 00:08:03.393 00:21:57 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:03.393 00:08:03.393 00:08:03.393 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.393 http://cunit.sourceforge.net/ 00:08:03.393 00:08:03.393 00:08:03.393 Suite: lun_suite 00:08:03.393 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-04-24 00:21:57.132196] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:03.393 passed 00:08:03.393 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-04-24 00:21:57.132600] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:03.393 passed 00:08:03.393 Test: lun_task_mgmt_execute_lun_reset ...passed 00:08:03.393 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:03.393 Test: 
lun_task_mgmt_execute_invalid_case ...passed 00:08:03.393 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-04-24 00:21:57.132791] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:03.393 passed 00:08:03.393 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:08:03.393 Test: lun_append_task_null_lun_not_supported ...passed 00:08:03.393 Test: lun_execute_scsi_task_pending ...passed 00:08:03.393 Test: lun_execute_scsi_task_complete ...passed 00:08:03.393 Test: lun_execute_scsi_task_resize ...passed 00:08:03.393 Test: lun_destruct_success ...passed 00:08:03.393 Test: lun_construct_null_ctx ...passed 00:08:03.393 Test: lun_construct_success ...passed 00:08:03.393 Test: lun_reset_task_wait_scsi_task_complete ...[2024-04-24 00:21:57.133055] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:08:03.393 passed 00:08:03.393 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:03.393 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:03.393 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:03.393 00:08:03.394 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.394 suites 1 1 n/a 0 0 00:08:03.394 tests 18 18 18 0 0 00:08:03.394 asserts 153 153 153 0 n/a 00:08:03.394 00:08:03.394 Elapsed time = 0.001 seconds 00:08:03.394 00:21:57 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:03.394 00:08:03.394 00:08:03.394 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.394 http://cunit.sourceforge.net/ 00:08:03.394 00:08:03.394 00:08:03.394 Suite: scsi_suite 00:08:03.394 Test: scsi_init ...passed 00:08:03.394 00:08:03.394 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.394 suites 1 1 n/a 0 0 00:08:03.394 tests 1 1 1 0 0 00:08:03.394 asserts 1 1 1 0 n/a 00:08:03.394 00:08:03.394 Elapsed time = 0.000 seconds 00:08:03.652 00:21:57 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:03.652 00:08:03.652 00:08:03.652 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.652 http://cunit.sourceforge.net/ 00:08:03.652 00:08:03.652 00:08:03.652 Suite: translation_suite 00:08:03.652 Test: mode_select_6_test ...passed 00:08:03.652 Test: mode_select_6_test2 ...passed 00:08:03.652 Test: mode_sense_6_test ...passed 00:08:03.652 Test: mode_sense_10_test ...passed 00:08:03.652 Test: inquiry_evpd_test ...passed 00:08:03.652 Test: inquiry_standard_test ...passed 00:08:03.652 Test: inquiry_overflow_test ...passed 00:08:03.652 Test: task_complete_test ...passed 00:08:03.652 Test: lba_range_test ...passed 00:08:03.652 Test: xfer_len_test ...[2024-04-24 00:21:57.210517] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:08:03.652 passed 00:08:03.652 Test: xfer_test ...passed 00:08:03.652 Test: scsi_name_padding_test ...passed 00:08:03.652 Test: get_dif_ctx_test ...passed 00:08:03.652 Test: unmap_split_test ...passed 00:08:03.652 00:08:03.652 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.652 suites 1 1 n/a 0 0 00:08:03.652 tests 14 14 14 0 0 00:08:03.652 asserts 1205 1205 1205 0 n/a 00:08:03.652 00:08:03.652 Elapsed time = 0.004 seconds 00:08:03.652 00:21:57 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:03.652 00:08:03.652 00:08:03.652 CUnit - A unit 
testing framework for C - Version 2.1-3 00:08:03.652 http://cunit.sourceforge.net/ 00:08:03.652 00:08:03.652 00:08:03.652 Suite: reservation_suite 00:08:03.652 Test: test_reservation_register ...[2024-04-24 00:21:57.253452] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:03.652 passed 00:08:03.652 Test: test_reservation_reserve ...[2024-04-24 00:21:57.253889] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:03.652 [2024-04-24 00:21:57.253983] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:08:03.652 passed 00:08:03.652 Test: test_reservation_preempt_non_all_regs ...[2024-04-24 00:21:57.254101] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:03.652 [2024-04-24 00:21:57.254175] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:03.652 passed 00:08:03.652 Test: test_reservation_preempt_all_regs ...[2024-04-24 00:21:57.254264] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:03.652 passed 00:08:03.652 Test: test_reservation_cmds_conflict ...[2024-04-24 00:21:57.254397] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:03.652 [2024-04-24 00:21:57.254539] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:03.652 [2024-04-24 00:21:57.254620] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:03.652 [2024-04-24 00:21:57.254715] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:03.652 [2024-04-24 00:21:57.254763] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:03.652 [2024-04-24 00:21:57.254816] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:03.652 [2024-04-24 00:21:57.254858] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:03.652 passed 00:08:03.652 Test: test_scsi2_reserve_release ...passed 00:08:03.652 Test: test_pr_with_scsi2_reserve_release ...[2024-04-24 00:21:57.255079] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:03.652 passed 00:08:03.652 00:08:03.652 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.652 suites 1 1 n/a 0 0 00:08:03.652 tests 7 7 7 0 0 00:08:03.652 asserts 257 257 257 0 n/a 00:08:03.652 00:08:03.652 Elapsed time = 0.002 seconds 00:08:03.652 00:08:03.652 real 0m0.213s 00:08:03.653 user 0m0.094s 00:08:03.653 sys 0m0.121s 00:08:03.653 00:21:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:03.653 00:21:57 -- common/autotest_common.sh@10 -- # set +x 00:08:03.653 ************************************ 00:08:03.653 END TEST 
unittest_scsi 00:08:03.653 ************************************ 00:08:03.653 00:21:57 -- unit/unittest.sh@276 -- # uname -s 00:08:03.653 00:21:57 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:08:03.653 00:21:57 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:08:03.653 00:21:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:03.653 00:21:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.653 00:21:57 -- common/autotest_common.sh@10 -- # set +x 00:08:03.653 ************************************ 00:08:03.653 START TEST unittest_sock 00:08:03.653 ************************************ 00:08:03.653 00:21:57 -- common/autotest_common.sh@1111 -- # unittest_sock 00:08:03.653 00:21:57 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:03.653 00:08:03.653 00:08:03.653 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.653 http://cunit.sourceforge.net/ 00:08:03.653 00:08:03.653 00:08:03.653 Suite: sock 00:08:03.653 Test: posix_sock ...passed 00:08:03.653 Test: ut_sock ...passed 00:08:03.653 Test: posix_sock_group ...passed 00:08:03.653 Test: ut_sock_group ...passed 00:08:03.653 Test: posix_sock_group_fairness ...passed 00:08:03.653 Test: _posix_sock_close ...passed 00:08:03.653 Test: sock_get_default_opts ...passed 00:08:03.653 Test: ut_sock_impl_get_set_opts ...passed 00:08:03.653 Test: posix_sock_impl_get_set_opts ...passed 00:08:03.653 Test: ut_sock_map ...passed 00:08:03.653 Test: override_impl_opts ...passed 00:08:03.653 Test: ut_sock_group_get_ctx ...passed 00:08:03.653 00:08:03.653 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.653 suites 1 1 n/a 0 0 00:08:03.653 tests 12 12 12 0 0 00:08:03.653 asserts 349 349 349 0 n/a 00:08:03.653 00:08:03.653 Elapsed time = 0.009 seconds 00:08:03.912 00:21:57 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:03.912 00:08:03.912 00:08:03.912 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.912 http://cunit.sourceforge.net/ 00:08:03.912 00:08:03.912 00:08:03.912 Suite: posix 00:08:03.912 Test: flush ...passed 00:08:03.912 00:08:03.912 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.912 suites 1 1 n/a 0 0 00:08:03.912 tests 1 1 1 0 0 00:08:03.912 asserts 28 28 28 0 n/a 00:08:03.912 00:08:03.912 Elapsed time = 0.000 seconds 00:08:03.912 00:21:57 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:03.912 00:08:03.912 real 0m0.115s 00:08:03.912 user 0m0.028s 00:08:03.912 sys 0m0.065s 00:08:03.912 00:21:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:03.912 00:21:57 -- common/autotest_common.sh@10 -- # set +x 00:08:03.912 ************************************ 00:08:03.912 END TEST unittest_sock 00:08:03.912 ************************************ 00:08:03.912 00:21:57 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:03.912 00:21:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:03.912 00:21:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.912 00:21:57 -- common/autotest_common.sh@10 -- # set +x 00:08:03.912 ************************************ 00:08:03.912 START TEST unittest_thread 00:08:03.912 ************************************ 00:08:03.912 00:21:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:03.912 
00:08:03.912 00:08:03.912 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.912 http://cunit.sourceforge.net/ 00:08:03.912 00:08:03.912 00:08:03.912 Suite: io_channel 00:08:03.912 Test: thread_alloc ...passed 00:08:03.912 Test: thread_send_msg ...passed 00:08:03.912 Test: thread_poller ...passed 00:08:03.912 Test: poller_pause ...passed 00:08:03.912 Test: thread_for_each ...passed 00:08:03.912 Test: for_each_channel_remove ...passed 00:08:03.912 Test: for_each_channel_unreg ...[2024-04-24 00:21:57.630052] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffdba926210 already registered (old:0x613000000200 new:0x6130000003c0) 00:08:03.912 passed 00:08:03.912 Test: thread_name ...passed 00:08:03.912 Test: channel ...[2024-04-24 00:21:57.633396] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x563379b6fd20 00:08:03.912 passed 00:08:03.912 Test: channel_destroy_races ...passed 00:08:03.912 Test: thread_exit_test ...[2024-04-24 00:21:57.637579] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:08:03.912 passed 00:08:03.912 Test: thread_update_stats_test ...passed 00:08:03.912 Test: nested_channel ...passed 00:08:03.912 Test: device_unregister_and_thread_exit_race ...passed 00:08:03.912 Test: cache_closest_timed_poller ...passed 00:08:03.912 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:03.912 Test: io_device_lookup ...passed 00:08:03.912 Test: spdk_spin ...[2024-04-24 00:21:57.647012] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:03.912 [2024-04-24 00:21:57.647098] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffdba926200 00:08:03.912 [2024-04-24 00:21:57.647196] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:03.912 [2024-04-24 00:21:57.648797] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:03.912 [2024-04-24 00:21:57.648882] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffdba926200 00:08:03.912 [2024-04-24 00:21:57.648922] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:03.912 [2024-04-24 00:21:57.648961] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffdba926200 00:08:03.912 [2024-04-24 00:21:57.649000] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:03.912 [2024-04-24 00:21:57.649042] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffdba926200 00:08:03.912 [2024-04-24 00:21:57.649073] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:03.912 [2024-04-24 00:21:57.649131] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: 
spinlock 0x7ffdba926200 00:08:03.912 passed 00:08:03.912 Test: for_each_channel_and_thread_exit_race ...passed 00:08:03.912 Test: for_each_thread_and_thread_exit_race ...passed 00:08:03.912 00:08:03.912 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.912 suites 1 1 n/a 0 0 00:08:03.912 tests 20 20 20 0 0 00:08:03.912 asserts 409 409 409 0 n/a 00:08:03.912 00:08:03.912 Elapsed time = 0.042 seconds 00:08:03.912 ************************************ 00:08:03.912 END TEST unittest_thread 00:08:03.912 ************************************ 00:08:03.912 00:08:03.912 real 0m0.095s 00:08:03.912 user 0m0.060s 00:08:03.912 sys 0m0.036s 00:08:03.912 00:21:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:03.912 00:21:57 -- common/autotest_common.sh@10 -- # set +x 00:08:04.176 00:21:57 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:04.176 00:21:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:04.176 00:21:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.176 00:21:57 -- common/autotest_common.sh@10 -- # set +x 00:08:04.176 ************************************ 00:08:04.176 START TEST unittest_iobuf 00:08:04.176 ************************************ 00:08:04.176 00:21:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:04.176 00:08:04.176 00:08:04.176 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.176 http://cunit.sourceforge.net/ 00:08:04.176 00:08:04.176 00:08:04.176 Suite: io_channel 00:08:04.176 Test: iobuf ...passed 00:08:04.176 Test: iobuf_cache ...[2024-04-24 00:21:57.815413] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 311:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:04.176 [2024-04-24 00:21:57.815965] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:04.176 [2024-04-24 00:21:57.816323] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 323:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:04.176 [2024-04-24 00:21:57.816505] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 326:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:04.176 [2024-04-24 00:21:57.816756] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 311:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:04.176 [2024-04-24 00:21:57.816911] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:08:04.176 passed 00:08:04.176 00:08:04.176 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.176 suites 1 1 n/a 0 0 00:08:04.176 tests 2 2 2 0 0 00:08:04.176 asserts 107 107 107 0 n/a 00:08:04.176 00:08:04.176 Elapsed time = 0.007 seconds 00:08:04.176 00:08:04.176 real 0m0.051s 00:08:04.176 user 0m0.027s 00:08:04.176 sys 0m0.023s 00:08:04.176 00:21:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:04.176 00:21:57 -- common/autotest_common.sh@10 -- # set +x 00:08:04.176 ************************************ 00:08:04.176 END TEST unittest_iobuf 00:08:04.176 ************************************ 00:08:04.176 00:21:57 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:08:04.176 00:21:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:04.176 00:21:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.176 00:21:57 -- common/autotest_common.sh@10 -- # set +x 00:08:04.176 ************************************ 00:08:04.176 START TEST unittest_util 00:08:04.176 ************************************ 00:08:04.176 00:21:57 -- common/autotest_common.sh@1111 -- # unittest_util 00:08:04.176 00:21:57 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:04.176 00:08:04.176 00:08:04.176 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.176 http://cunit.sourceforge.net/ 00:08:04.176 00:08:04.176 00:08:04.176 Suite: base64 00:08:04.176 Test: test_base64_get_encoded_strlen ...passed 00:08:04.176 Test: test_base64_get_decoded_len ...passed 00:08:04.176 Test: test_base64_encode ...passed 00:08:04.176 Test: test_base64_decode ...passed 00:08:04.176 Test: test_base64_urlsafe_encode ...passed 00:08:04.176 Test: test_base64_urlsafe_decode ...passed 00:08:04.176 00:08:04.176 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.176 suites 1 1 n/a 0 0 00:08:04.176 tests 6 6 6 0 0 00:08:04.176 asserts 112 112 112 0 n/a 00:08:04.176 00:08:04.176 Elapsed time = 0.000 seconds 00:08:04.435 00:21:57 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:04.435 00:08:04.435 00:08:04.435 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.435 http://cunit.sourceforge.net/ 00:08:04.435 00:08:04.435 00:08:04.435 Suite: bit_array 00:08:04.435 Test: test_1bit ...passed 00:08:04.435 Test: test_64bit ...passed 00:08:04.435 Test: test_find ...passed 00:08:04.435 Test: test_resize ...passed 00:08:04.435 Test: test_errors ...passed 00:08:04.435 Test: test_count ...passed 00:08:04.435 Test: test_mask_store_load ...passed 00:08:04.435 Test: test_mask_clear ...passed 00:08:04.435 00:08:04.435 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.435 suites 1 1 n/a 0 0 00:08:04.435 tests 8 8 8 0 0 00:08:04.435 asserts 5075 5075 5075 0 n/a 00:08:04.435 00:08:04.435 Elapsed time = 0.002 seconds 00:08:04.435 00:21:58 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:04.435 00:08:04.435 00:08:04.435 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.435 http://cunit.sourceforge.net/ 00:08:04.435 00:08:04.435 00:08:04.435 Suite: cpuset 00:08:04.435 Test: test_cpuset ...passed 00:08:04.435 Test: test_cpuset_parse ...[2024-04-24 00:21:58.030804] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:04.435 [2024-04-24 00:21:58.031644] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:08:04.435 [2024-04-24 00:21:58.032045] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:04.435 [2024-04-24 00:21:58.032416] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:04.435 [2024-04-24 00:21:58.032695] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:04.435 [2024-04-24 00:21:58.032987] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:04.435 [2024-04-24 00:21:58.033329] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:04.435 [2024-04-24 00:21:58.033708] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:04.435 passed 00:08:04.435 Test: test_cpuset_fmt ...passed 00:08:04.435 00:08:04.435 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.435 suites 1 1 n/a 0 0 00:08:04.435 tests 3 3 3 0 0 00:08:04.435 asserts 65 65 65 0 n/a 00:08:04.435 00:08:04.435 Elapsed time = 0.003 seconds 00:08:04.435 00:21:58 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:04.435 00:08:04.435 00:08:04.435 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.435 http://cunit.sourceforge.net/ 00:08:04.435 00:08:04.435 00:08:04.435 Suite: crc16 00:08:04.435 Test: test_crc16_t10dif ...passed 00:08:04.435 Test: test_crc16_t10dif_seed ...passed 00:08:04.435 Test: test_crc16_t10dif_copy ...passed 00:08:04.435 00:08:04.435 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.435 suites 1 1 n/a 0 0 00:08:04.435 tests 3 3 3 0 0 00:08:04.435 asserts 5 5 5 0 n/a 00:08:04.435 00:08:04.435 Elapsed time = 0.000 seconds 00:08:04.435 00:21:58 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:04.435 00:08:04.435 00:08:04.435 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.435 http://cunit.sourceforge.net/ 00:08:04.435 00:08:04.435 00:08:04.435 Suite: crc32_ieee 00:08:04.435 Test: test_crc32_ieee ...passed 00:08:04.435 00:08:04.435 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.435 suites 1 1 n/a 0 0 00:08:04.435 tests 1 1 1 0 0 00:08:04.435 asserts 1 1 1 0 n/a 00:08:04.435 00:08:04.435 Elapsed time = 0.000 seconds 00:08:04.435 00:21:58 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:04.435 00:08:04.435 00:08:04.435 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.435 http://cunit.sourceforge.net/ 00:08:04.435 00:08:04.435 00:08:04.435 Suite: crc32c 00:08:04.435 Test: test_crc32c ...passed 00:08:04.435 Test: test_crc32c_nvme ...passed 00:08:04.435 00:08:04.435 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.435 suites 1 1 n/a 0 0 00:08:04.435 tests 2 2 2 0 0 00:08:04.435 asserts 16 16 16 0 n/a 00:08:04.435 00:08:04.435 Elapsed time = 0.000 seconds 00:08:04.435 00:21:58 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:04.435 00:08:04.435 00:08:04.435 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.435 http://cunit.sourceforge.net/ 00:08:04.435 00:08:04.435 00:08:04.435 Suite: crc64 00:08:04.435 Test: test_crc64_nvme 
...passed 00:08:04.435 00:08:04.435 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.435 suites 1 1 n/a 0 0 00:08:04.435 tests 1 1 1 0 0 00:08:04.435 asserts 4 4 4 0 n/a 00:08:04.435 00:08:04.435 Elapsed time = 0.000 seconds 00:08:04.695 00:21:58 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:04.695 00:08:04.695 00:08:04.695 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.695 http://cunit.sourceforge.net/ 00:08:04.695 00:08:04.695 00:08:04.695 Suite: string 00:08:04.695 Test: test_parse_ip_addr ...passed 00:08:04.695 Test: test_str_chomp ...passed 00:08:04.695 Test: test_parse_capacity ...passed 00:08:04.695 Test: test_sprintf_append_realloc ...passed 00:08:04.695 Test: test_strtol ...passed 00:08:04.695 Test: test_strtoll ...passed 00:08:04.695 Test: test_strarray ...passed 00:08:04.695 Test: test_strcpy_replace ...passed 00:08:04.695 00:08:04.695 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.695 suites 1 1 n/a 0 0 00:08:04.695 tests 8 8 8 0 0 00:08:04.695 asserts 161 161 161 0 n/a 00:08:04.696 00:08:04.696 Elapsed time = 0.001 seconds 00:08:04.696 00:21:58 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:04.696 00:08:04.696 00:08:04.696 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.696 http://cunit.sourceforge.net/ 00:08:04.696 00:08:04.696 00:08:04.696 Suite: dif 00:08:04.696 Test: dif_generate_and_verify_test ...[2024-04-24 00:21:58.280473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:04.696 [2024-04-24 00:21:58.281311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:04.696 [2024-04-24 00:21:58.281860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:04.696 [2024-04-24 00:21:58.282454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:04.696 [2024-04-24 00:21:58.283143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:04.696 [2024-04-24 00:21:58.283695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:04.696 passed 00:08:04.696 Test: dif_disable_check_test ...[2024-04-24 00:21:58.285330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:04.696 [2024-04-24 00:21:58.285938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:04.696 [2024-04-24 00:21:58.286501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:04.696 passed 00:08:04.696 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-04-24 00:21:58.288290] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:04.696 [2024-04-24 00:21:58.288772] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:04.696 
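The Guard, App Tag and Ref Tag comparisons reported by dif_ut above correspond to the three fields of the 8-byte T10 protection-information tuple that DIF appends to each data block: a CRC guard computed over the block, a 16-bit application tag, and a 32-bit reference tag normally derived from the LBA. A deliberately simplified, host-endian sketch of that comparison, without the masking, big-endian storage or extended-PI formats the real lib/util/dif.c handles:

    #include <stdint.h>
    #include <stdbool.h>

    /* Simplified view of a classic 8-byte DIF tuple. */
    struct dif_tuple {
        uint16_t guard;     /* CRC over the data block */
        uint16_t app_tag;   /* application tag */
        uint32_t ref_tag;   /* reference tag, usually low bits of the LBA */
    };

    /* Return true when all three fields match; each failing branch
     * mirrors one of the error strings seen in the log. */
    static bool dif_tuple_check(const struct dif_tuple *dif,
                                uint16_t exp_guard, uint16_t exp_app,
                                uint32_t exp_ref)
    {
        if (dif->guard != exp_guard) {
            return false;   /* "Failed to compare Guard" */
        }
        if (dif->app_tag != exp_app) {
            return false;   /* "Failed to compare App Tag" */
        }
        if (dif->ref_tag != exp_ref) {
            return false;   /* "Failed to compare Ref Tag" */
        }
        return true;
    }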
[2024-04-24 00:21:58.289237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:04.696 [2024-04-24 00:21:58.289739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:04.696 [2024-04-24 00:21:58.290193] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:04.696 [2024-04-24 00:21:58.290640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:04.696 [2024-04-24 00:21:58.291168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:04.696 [2024-04-24 00:21:58.291609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:04.696 [2024-04-24 00:21:58.292062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:04.696 [2024-04-24 00:21:58.292527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:04.696 [2024-04-24 00:21:58.292997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:04.696 passed 00:08:04.696 Test: dif_apptag_mask_test ...[2024-04-24 00:21:58.293609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:04.696 [2024-04-24 00:21:58.294150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:04.696 passed 00:08:04.696 Test: dif_sec_512_md_0_error_test ...[2024-04-24 00:21:58.294701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:04.696 passed 00:08:04.696 Test: dif_sec_4096_md_0_error_test ...[2024-04-24 00:21:58.295090] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:04.696 [2024-04-24 00:21:58.295210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:04.696 passed 00:08:04.696 Test: dif_sec_4100_md_128_error_test ...[2024-04-24 00:21:58.295478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:04.696 [2024-04-24 00:21:58.295688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:04.696 passed 00:08:04.696 Test: dif_guard_seed_test ...passed 00:08:04.696 Test: dif_guard_value_test ...passed 00:08:04.696 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:04.696 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:04.696 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:04.696 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:04.696 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:04.696 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:04.696 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:04.696 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:04.696 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:04.696 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:04.696 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:04.696 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:04.696 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:04.696 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:04.696 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:04.696 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:04.696 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:04.696 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:04.696 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-24 00:21:58.350454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=ff4c, Actual=fd4c 00:08:04.696 [2024-04-24 00:21:58.353690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fc21, Actual=fe21 00:08:04.696 [2024-04-24 00:21:58.356843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.696 [2024-04-24 00:21:58.359939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.696 [2024-04-24 00:21:58.363015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2000061 00:08:04.696 [2024-04-24 00:21:58.365678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2000061 00:08:04.696 [2024-04-24 00:21:58.368374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=7f99 00:08:04.696 [2024-04-24 00:21:58.371028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fe21, Actual=63cd 00:08:04.696 [2024-04-24 00:21:58.373662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=18b753ed, Actual=1ab753ed 00:08:04.696 [2024-04-24 00:21:58.376348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3a574660, Actual=38574660 00:08:04.696 [2024-04-24 00:21:58.379151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.696 [2024-04-24 00:21:58.381900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.696 [2024-04-24 00:21:58.384594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2000061 00:08:04.696 [2024-04-24 00:21:58.387289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2000061 00:08:04.696 [2024-04-24 00:21:58.389933] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=d0583767 00:08:04.696 [2024-04-24 00:21:58.392579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=38574660, Actual=a09ddc53 00:08:04.696 [2024-04-24 00:21:58.395279] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3 00:08:04.696 [2024-04-24 00:21:58.398323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4a37a266, Actual=88010a2d4837a266 00:08:04.696 [2024-04-24 00:21:58.401537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.696 [2024-04-24 00:21:58.404705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.696 [2024-04-24 00:21:58.407602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000000061 00:08:04.696 [2024-04-24 00:21:58.410247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000000061 00:08:04.696 [2024-04-24 00:21:58.412915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=dd0cbe3d6aff9120 00:08:04.696 [2024-04-24 00:21:58.415535] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=88010a2d4837a266, Actual=7b5ae5aa739ec823 00:08:04.696 passed 00:08:04.696 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-04-24 00:21:58.417393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:08:04.696 [2024-04-24 00:21:58.417804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:08:04.696 [2024-04-24 00:21:58.418206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.696 [2024-04-24 00:21:58.418629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.696 [2024-04-24 00:21:58.419103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.696 [2024-04-24 00:21:58.419513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.696 [2024-04-24 00:21:58.419917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7f99 00:08:04.696 [2024-04-24 00:21:58.420320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=63cd 00:08:04.697 [2024-04-24 00:21:58.420748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:08:04.697 [2024-04-24 00:21:58.421156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:08:04.697 [2024-04-24 00:21:58.421597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.422002] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.422403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.697 [2024-04-24 00:21:58.422839] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.697 [2024-04-24 00:21:58.423320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d0583767 00:08:04.697 [2024-04-24 00:21:58.423723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a09ddc53 00:08:04.697 [2024-04-24 00:21:58.424155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3 00:08:04.697 [2024-04-24 00:21:58.424564] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4a37a266, Actual=88010a2d4837a266 00:08:04.697 [2024-04-24 00:21:58.424974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.425368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.425773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058 00:08:04.697 [2024-04-24 00:21:58.426178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058 00:08:04.697 [2024-04-24 00:21:58.426602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=dd0cbe3d6aff9120 00:08:04.697 
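For the classic PI format, the 16-bit Guard values being compared in these dif_ut entries are CRC16 T10-DIF checksums of the data block (generator polynomial 0x8BB7, initial value 0, no bit reflection). A plain bitwise reference implementation, as opposed to the table-driven or instruction-accelerated code a production storage stack would use:

    #include <stdint.h>
    #include <stddef.h>

    /* Bitwise CRC16 T10-DIF: polynomial 0x8BB7, init 0, MSB first. */
    static uint16_t crc16_t10dif(const void *buf, size_t len)
    {
        const uint8_t *data = buf;
        uint16_t crc = 0;

        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)data[i] << 8;
            for (int bit = 0; bit < 8; bit++) {
                if (crc & 0x8000) {
                    crc = (uint16_t)((crc << 1) ^ 0x8BB7);
                } else {
                    crc = (uint16_t)(crc << 1);
                }
            }
        }
        return crc;
    }

Because each block carries its own guard, a single flipped bit in the data changes the CRC and surfaces as a Guard mismatch before the corrupted block ever reaches the application.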
[2024-04-24 00:21:58.427037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7b5ae5aa739ec823 00:08:04.697 passed 00:08:04.697 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-04-24 00:21:58.427635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:08:04.697 [2024-04-24 00:21:58.428035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:08:04.697 [2024-04-24 00:21:58.428449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.428863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.429288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.697 [2024-04-24 00:21:58.429702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.697 [2024-04-24 00:21:58.430115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7f99 00:08:04.697 [2024-04-24 00:21:58.430518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=63cd 00:08:04.697 [2024-04-24 00:21:58.430944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:08:04.697 [2024-04-24 00:21:58.431363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:08:04.697 [2024-04-24 00:21:58.431772] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.432173] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.432581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.697 [2024-04-24 00:21:58.432992] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.697 [2024-04-24 00:21:58.433398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d0583767 00:08:04.697 [2024-04-24 00:21:58.433826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a09ddc53 00:08:04.697 [2024-04-24 00:21:58.434329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3 00:08:04.697 [2024-04-24 00:21:58.434802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4a37a266, Actual=88010a2d4837a266 00:08:04.697 [2024-04-24 00:21:58.435297] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.435773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.436238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058 00:08:04.697 [2024-04-24 00:21:58.436688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058 00:08:04.697 [2024-04-24 00:21:58.437171] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=dd0cbe3d6aff9120 00:08:04.697 [2024-04-24 00:21:58.437625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7b5ae5aa739ec823 00:08:04.697 passed 00:08:04.697 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-04-24 00:21:58.438379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:08:04.697 [2024-04-24 00:21:58.438886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:08:04.697 [2024-04-24 00:21:58.439378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.439847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.440348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.697 [2024-04-24 00:21:58.440831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.697 [2024-04-24 00:21:58.441335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7f99 00:08:04.697 [2024-04-24 00:21:58.441828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=63cd 00:08:04.697 [2024-04-24 00:21:58.442314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:08:04.697 [2024-04-24 00:21:58.442813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:08:04.697 [2024-04-24 00:21:58.443361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.443834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.444259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.697 [2024-04-24 00:21:58.444689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.697 [2024-04-24 00:21:58.445110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d0583767 00:08:04.697 [2024-04-24 00:21:58.445519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a09ddc53 00:08:04.697 [2024-04-24 00:21:58.445965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3 00:08:04.697 [2024-04-24 00:21:58.446422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4a37a266, Actual=88010a2d4837a266 00:08:04.697 [2024-04-24 00:21:58.446875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.447319] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.447733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058 00:08:04.697 [2024-04-24 00:21:58.448145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058 00:08:04.697 [2024-04-24 00:21:58.448571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=dd0cbe3d6aff9120 00:08:04.697 [2024-04-24 00:21:58.448987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7b5ae5aa739ec823 00:08:04.697 passed 00:08:04.697 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-04-24 00:21:58.449600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:08:04.697 [2024-04-24 00:21:58.450020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:08:04.697 [2024-04-24 00:21:58.450428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.450856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.697 [2024-04-24 00:21:58.451321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.697 [2024-04-24 00:21:58.451754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.697 [2024-04-24 00:21:58.452218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7f99 00:08:04.697 [2024-04-24 00:21:58.452635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=63cd 00:08:04.697 passed 00:08:04.698 Test: 
dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-04-24 00:21:58.453263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:08:04.698 [2024-04-24 00:21:58.453669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:08:04.698 [2024-04-24 00:21:58.454109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.698 [2024-04-24 00:21:58.454527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.698 [2024-04-24 00:21:58.455054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.698 [2024-04-24 00:21:58.455515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.698 [2024-04-24 00:21:58.455951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d0583767 00:08:04.698 [2024-04-24 00:21:58.456373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a09ddc53 00:08:04.698 [2024-04-24 00:21:58.456879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3 00:08:04.698 [2024-04-24 00:21:58.457336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4a37a266, Actual=88010a2d4837a266 00:08:04.698 [2024-04-24 00:21:58.457762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.698 [2024-04-24 00:21:58.458182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.698 [2024-04-24 00:21:58.458602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058 00:08:04.698 [2024-04-24 00:21:58.459063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058 00:08:04.698 [2024-04-24 00:21:58.459513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=dd0cbe3d6aff9120 00:08:04.698 [2024-04-24 00:21:58.459922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7b5ae5aa739ec823 00:08:04.698 passed 00:08:04.698 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-04-24 00:21:58.460521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:08:04.698 [2024-04-24 00:21:58.460927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:08:04.698 [2024-04-24 00:21:58.461413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed 
to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.698 [2024-04-24 00:21:58.461888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.698 [2024-04-24 00:21:58.462351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.698 [2024-04-24 00:21:58.462792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.698 [2024-04-24 00:21:58.463237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7f99 00:08:04.698 [2024-04-24 00:21:58.463658] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=63cd 00:08:04.698 passed 00:08:04.698 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-04-24 00:21:58.464274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:08:04.698 [2024-04-24 00:21:58.464697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:08:04.698 [2024-04-24 00:21:58.465157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.698 [2024-04-24 00:21:58.465597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.698 [2024-04-24 00:21:58.466053] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.698 [2024-04-24 00:21:58.466484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:08:04.698 [2024-04-24 00:21:58.466958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=d0583767 00:08:04.698 [2024-04-24 00:21:58.467435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a09ddc53 00:08:04.698 [2024-04-24 00:21:58.467958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3 00:08:04.698 [2024-04-24 00:21:58.468432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4a37a266, Actual=88010a2d4837a266 00:08:04.698 [2024-04-24 00:21:58.468878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.698 [2024-04-24 00:21:58.469307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:08:04.698 [2024-04-24 00:21:58.469763] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=20000000058 00:08:04.698 [2024-04-24 00:21:58.470191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, 
Actual=20000000058 00:08:04.698 [2024-04-24 00:21:58.470619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=dd0cbe3d6aff9120 00:08:04.698 [2024-04-24 00:21:58.471089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=7b5ae5aa739ec823 00:08:04.698 passed 00:08:04.698 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:04.698 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:04.698 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:04.958 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:04.958 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:04.958 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:04.958 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:04.958 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:04.958 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:04.958 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-24 00:21:58.517912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=ff4c, Actual=fd4c 00:08:04.958 [2024-04-24 00:21:58.519185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=eec5, Actual=ecc5 00:08:04.958 [2024-04-24 00:21:58.520436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.958 [2024-04-24 00:21:58.521656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.958 [2024-04-24 00:21:58.522900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2000061 00:08:04.958 [2024-04-24 00:21:58.524140] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2000061 00:08:04.958 [2024-04-24 00:21:58.525344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=7f99 00:08:04.958 [2024-04-24 00:21:58.526561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=c6fb 00:08:04.958 [2024-04-24 00:21:58.527814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=18b753ed, Actual=1ab753ed 00:08:04.959 [2024-04-24 00:21:58.529047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=52bfa05f, Actual=50bfa05f 00:08:04.959 [2024-04-24 00:21:58.530278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.531560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.532785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2000061 00:08:04.959 [2024-04-24 
00:21:58.534013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2000061 00:08:04.959 [2024-04-24 00:21:58.535261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=d0583767 00:08:04.959 [2024-04-24 00:21:58.536492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=50d983f, Actual=9dc7020c 00:08:04.959 [2024-04-24 00:21:58.537718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3 00:08:04.959 [2024-04-24 00:21:58.539008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=565d326b64a3b323, Actual=565d326b66a3b323 00:08:04.959 [2024-04-24 00:21:58.540241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.541470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.542715] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000000061 00:08:04.959 [2024-04-24 00:21:58.543980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000000061 00:08:04.959 [2024-04-24 00:21:58.545201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=dd0cbe3d6aff9120 00:08:04.959 [2024-04-24 00:21:58.546471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=cf92ed01a939bad7 00:08:04.959 passed 00:08:04.959 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-24 00:21:58.547148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ff4c, Actual=fd4c 00:08:04.959 [2024-04-24 00:21:58.547537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2712, Actual=2512 00:08:04.959 [2024-04-24 00:21:58.547921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.548302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.548717] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000059 00:08:04.959 [2024-04-24 00:21:58.549127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000059 00:08:04.959 [2024-04-24 00:21:58.549500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7f99 00:08:04.959 [2024-04-24 00:21:58.549871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=f2c 00:08:04.959 [2024-04-24 00:21:58.550243] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=18b753ed, Actual=1ab753ed 00:08:04.959 [2024-04-24 00:21:58.550615] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cc0f680d, Actual=ce0f680d 00:08:04.959 [2024-04-24 00:21:58.551042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.551446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.551846] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000059 00:08:04.959 [2024-04-24 00:21:58.552234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000059 00:08:04.959 [2024-04-24 00:21:58.552600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=d0583767 00:08:04.959 [2024-04-24 00:21:58.552977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=377ca5e 00:08:04.959 [2024-04-24 00:21:58.553381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3 00:08:04.959 [2024-04-24 00:21:58.553765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=22bcd086bf7756de, Actual=22bcd086bd7756de 00:08:04.959 [2024-04-24 00:21:58.554156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.554535] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.554970] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000000059 00:08:04.959 [2024-04-24 00:21:58.555351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000000059 00:08:04.959 [2024-04-24 00:21:58.555688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=dd0cbe3d6aff9120 00:08:04.959 [2024-04-24 00:21:58.556092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=bb730fec72ed5f2a 00:08:04.959 passed 00:08:04.959 Test: dix_sec_512_md_0_error ...[2024-04-24 00:21:58.556436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:04.959 passed 00:08:04.959 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:08:04.959 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:04.959 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:04.959 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:04.959 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:04.959 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:04.959 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:04.959 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:04.959 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:04.959 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-24 00:21:58.602698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=ff4c, Actual=fd4c 00:08:04.959 [2024-04-24 00:21:58.604009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=eec5, Actual=ecc5 00:08:04.959 [2024-04-24 00:21:58.605261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.606494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.607786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2000061 00:08:04.959 [2024-04-24 00:21:58.609023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2000061 00:08:04.959 [2024-04-24 00:21:58.610248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=fd4c, Actual=7f99 00:08:04.959 [2024-04-24 00:21:58.611529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=5b17, Actual=c6fb 00:08:04.959 [2024-04-24 00:21:58.612782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=18b753ed, Actual=1ab753ed 00:08:04.959 [2024-04-24 00:21:58.614020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=52bfa05f, Actual=50bfa05f 00:08:04.959 [2024-04-24 00:21:58.615355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.616595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.959 [2024-04-24 00:21:58.617800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2000061 00:08:04.959 [2024-04-24 00:21:58.619060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=2000061 00:08:04.959 [2024-04-24 00:21:58.620285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=1ab753ed, Actual=d0583767 00:08:04.959 [2024-04-24 00:21:58.621518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=97, Expected=50d983f, Actual=9dc7020c 00:08:04.960 [2024-04-24 00:21:58.622797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3 00:08:04.960 [2024-04-24 00:21:58.624082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=565d326b64a3b323, Actual=565d326b66a3b323 00:08:04.960 [2024-04-24 00:21:58.625400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.960 [2024-04-24 00:21:58.626702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=97, Expected=88, Actual=288 00:08:04.960 [2024-04-24 00:21:58.628039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000000061 00:08:04.960 [2024-04-24 00:21:58.629333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=97, Expected=61, Actual=20000000061 00:08:04.960 [2024-04-24 00:21:58.630680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=a576a7728ecc20d3, Actual=dd0cbe3d6aff9120 00:08:04.960 [2024-04-24 00:21:58.632015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=97, Expected=3cc902869290d092, Actual=cf92ed01a939bad7 00:08:04.960 passed 00:08:04.960 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-24 00:21:58.632846] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ff4c, Actual=fd4c 00:08:04.960 [2024-04-24 00:21:58.633291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2712, Actual=2512 00:08:04.960 [2024-04-24 00:21:58.633756] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=288 00:08:04.960 [2024-04-24 00:21:58.634226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=288 00:08:04.960 [2024-04-24 00:21:58.634740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000059 00:08:04.960 [2024-04-24 00:21:58.635216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000059 00:08:04.960 [2024-04-24 00:21:58.635693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=7f99 00:08:04.960 [2024-04-24 00:21:58.636147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=f2c 00:08:04.960 [2024-04-24 00:21:58.636594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=18b753ed, Actual=1ab753ed 00:08:04.960 [2024-04-24 00:21:58.637024] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cc0f680d, Actual=ce0f680d 00:08:04.960 [2024-04-24 00:21:58.637481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, 
Actual=288 00:08:04.960 [2024-04-24 00:21:58.637915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=288 00:08:04.960 [2024-04-24 00:21:58.638349] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000059 00:08:04.960 [2024-04-24 00:21:58.638673] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2000059 00:08:04.960 [2024-04-24 00:21:58.638997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=d0583767 00:08:04.960 [2024-04-24 00:21:58.639328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=377ca5e 00:08:04.960 [2024-04-24 00:21:58.639677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ccc20d3, Actual=a576a7728ecc20d3 00:08:04.960 [2024-04-24 00:21:58.640151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=22bcd086bf7756de, Actual=22bcd086bd7756de 00:08:04.960 [2024-04-24 00:21:58.640594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=288 00:08:04.960 [2024-04-24 00:21:58.641038] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=288 00:08:04.960 [2024-04-24 00:21:58.641478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000000059 00:08:04.960 [2024-04-24 00:21:58.641928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=20000000059 00:08:04.960 [2024-04-24 00:21:58.642366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=dd0cbe3d6aff9120 00:08:04.960 [2024-04-24 00:21:58.642825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=bb730fec72ed5f2a 00:08:04.960 passed 00:08:04.960 Test: set_md_interleave_iovs_test ...passed 00:08:04.960 Test: set_md_interleave_iovs_split_test ...passed 00:08:04.960 Test: dif_generate_stream_pi_16_test ...passed 00:08:04.960 Test: dif_generate_stream_test ...passed 00:08:04.960 Test: set_md_interleave_iovs_alignment_test ...[2024-04-24 00:21:58.651780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:08:04.960 passed 00:08:04.960 Test: dif_generate_split_test ...passed 00:08:04.960 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:04.960 Test: dif_verify_split_test ...passed 00:08:04.960 Test: dif_verify_stream_multi_segments_test ...passed 00:08:04.960 Test: update_crc32c_pi_16_test ...passed 00:08:04.960 Test: update_crc32c_test ...passed 00:08:04.960 Test: dif_update_crc32c_split_test ...passed 00:08:04.960 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:04.960 Test: get_range_with_md_test ...passed 00:08:04.960 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:04.960 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:04.960 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:04.960 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:04.960 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:04.960 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:04.960 Test: dif_generate_and_verify_unmap_test ...passed 00:08:04.960 00:08:04.960 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.960 suites 1 1 n/a 0 0 00:08:04.960 tests 79 79 79 0 0 00:08:04.960 asserts 3584 3584 3584 0 n/a 00:08:04.960 00:08:04.960 Elapsed time = 0.378 seconds 00:08:04.960 00:21:58 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:04.960 00:08:04.960 00:08:04.960 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.960 http://cunit.sourceforge.net/ 00:08:04.960 00:08:04.960 00:08:04.960 Suite: iov 00:08:04.960 Test: test_single_iov ...passed 00:08:04.960 Test: test_simple_iov ...passed 00:08:04.960 Test: test_complex_iov ...passed 00:08:04.960 Test: test_iovs_to_buf ...passed 00:08:04.960 Test: test_buf_to_iovs ...passed 00:08:04.960 Test: test_memset ...passed 00:08:04.960 Test: test_iov_one ...passed 00:08:04.960 Test: test_iov_xfer ...passed 00:08:04.960 00:08:04.960 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.960 suites 1 1 n/a 0 0 00:08:04.960 tests 8 8 8 0 0 00:08:04.960 asserts 156 156 156 0 n/a 00:08:04.960 00:08:04.960 Elapsed time = 0.000 seconds 00:08:05.219 00:21:58 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:05.219 00:08:05.219 00:08:05.219 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.219 http://cunit.sourceforge.net/ 00:08:05.219 00:08:05.219 00:08:05.219 Suite: math 00:08:05.219 Test: test_serial_number_arithmetic ...passed 00:08:05.219 Suite: erase 00:08:05.219 Test: test_memset_s ...passed 00:08:05.219 00:08:05.219 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.219 suites 2 2 n/a 0 0 00:08:05.219 tests 2 2 2 0 0 00:08:05.219 asserts 18 18 18 0 n/a 00:08:05.219 00:08:05.219 Elapsed time = 0.000 seconds 00:08:05.219 00:21:58 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:05.219 00:08:05.219 00:08:05.219 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.219 http://cunit.sourceforge.net/ 00:08:05.219 00:08:05.219 00:08:05.219 Suite: pipe 00:08:05.219 Test: test_create_destroy ...passed 00:08:05.219 Test: test_write_get_buffer ...passed 00:08:05.219 Test: test_write_advance ...passed 00:08:05.219 Test: test_read_get_buffer ...passed 00:08:05.219 Test: test_read_advance ...passed 00:08:05.219 Test: test_data ...passed 00:08:05.219 00:08:05.219 Run Summary: Type Total Ran 
Passed Failed Inactive 00:08:05.219 suites 1 1 n/a 0 0 00:08:05.219 tests 6 6 6 0 0 00:08:05.219 asserts 251 251 251 0 n/a 00:08:05.219 00:08:05.219 Elapsed time = 0.000 seconds 00:08:05.219 00:21:58 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:05.219 00:08:05.219 00:08:05.219 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.219 http://cunit.sourceforge.net/ 00:08:05.219 00:08:05.219 00:08:05.219 Suite: xor 00:08:05.219 Test: test_xor_gen ...passed 00:08:05.219 00:08:05.219 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.219 suites 1 1 n/a 0 0 00:08:05.219 tests 1 1 1 0 0 00:08:05.219 asserts 17 17 17 0 n/a 00:08:05.219 00:08:05.219 Elapsed time = 0.007 seconds 00:08:05.219 00:08:05.219 real 0m0.932s 00:08:05.219 user 0m0.620s 00:08:05.219 sys 0m0.256s 00:08:05.219 00:21:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.219 00:21:58 -- common/autotest_common.sh@10 -- # set +x 00:08:05.219 ************************************ 00:08:05.219 END TEST unittest_util 00:08:05.219 ************************************ 00:08:05.219 00:21:58 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:05.219 00:21:58 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:05.219 00:21:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.219 00:21:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.219 00:21:58 -- common/autotest_common.sh@10 -- # set +x 00:08:05.219 ************************************ 00:08:05.219 START TEST unittest_vhost 00:08:05.219 ************************************ 00:08:05.219 00:21:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:05.219 00:08:05.219 00:08:05.219 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.219 http://cunit.sourceforge.net/ 00:08:05.219 00:08:05.219 00:08:05.219 Suite: vhost_suite 00:08:05.220 Test: desc_to_iov_test ...[2024-04-24 00:21:59.002189] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:05.220 passed 00:08:05.477 Test: create_controller_test ...[2024-04-24 00:21:59.008306] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:05.477 [2024-04-24 00:21:59.008772] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:05.477 [2024-04-24 00:21:59.009085] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:05.477 [2024-04-24 00:21:59.009336] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:05.477 [2024-04-24 00:21:59.009576] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:05.477 [2024-04-24 00:21:59.009889] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1782:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-04-24 00:21:59.011392] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:05.477 passed 00:08:05.477 Test: session_find_by_vid_test ...passed 00:08:05.477 Test: remove_controller_test ...[2024-04-24 00:21:59.014617] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1867:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:05.477 passed 00:08:05.477 Test: vq_avail_ring_get_test ...passed 00:08:05.477 Test: vq_packed_ring_test ...passed 00:08:05.477 Test: vhost_blk_construct_test ...passed 00:08:05.477 00:08:05.477 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.477 suites 1 1 n/a 0 0 00:08:05.477 tests 7 7 7 0 0 00:08:05.477 asserts 147 147 147 0 n/a 00:08:05.477 00:08:05.477 Elapsed time = 0.016 seconds 00:08:05.477 00:08:05.477 real 0m0.067s 00:08:05.477 user 0m0.037s 00:08:05.477 sys 0m0.026s 00:08:05.477 00:21:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.477 00:21:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.477 ************************************ 00:08:05.477 END TEST unittest_vhost 00:08:05.477 ************************************ 00:08:05.477 00:21:59 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:05.477 00:21:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.477 00:21:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.477 00:21:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.477 ************************************ 00:08:05.477 START TEST unittest_dma 00:08:05.477 ************************************ 00:08:05.477 00:21:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:05.477 00:08:05.477 00:08:05.477 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.477 http://cunit.sourceforge.net/ 00:08:05.477 00:08:05.477 00:08:05.477 Suite: dma_suite 00:08:05.477 Test: test_dma ...[2024-04-24 00:21:59.151105] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:05.477 passed 00:08:05.477 00:08:05.477 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.477 suites 1 1 n/a 0 0 00:08:05.477 tests 1 1 1 0 0 00:08:05.477 asserts 54 54 54 0 n/a 00:08:05.477 00:08:05.477 Elapsed time = 0.001 seconds 00:08:05.477 00:08:05.477 real 0m0.036s 00:08:05.477 user 0m0.022s 00:08:05.477 sys 0m0.013s 00:08:05.477 00:21:59 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.477 00:21:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.477 ************************************ 00:08:05.477 END TEST unittest_dma 00:08:05.477 ************************************ 00:08:05.477 00:21:59 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:08:05.477 00:21:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.477 00:21:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.477 00:21:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.477 ************************************ 00:08:05.477 START TEST unittest_init 00:08:05.477 ************************************ 00:08:05.477 00:21:59 -- common/autotest_common.sh@1111 -- # unittest_init 00:08:05.477 00:21:59 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:05.736 00:08:05.736 00:08:05.736 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.736 http://cunit.sourceforge.net/ 00:08:05.736 00:08:05.736 00:08:05.736 Suite: subsystem_suite 00:08:05.736 Test: subsystem_sort_test_depends_on_single ...passed 00:08:05.736 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:05.736 Test: subsystem_sort_test_missing_dependency ...[2024-04-24 00:21:59.275363] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:05.736 [2024-04-24 00:21:59.275872] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:05.736 passed 00:08:05.736 00:08:05.736 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.736 suites 1 1 n/a 0 0 00:08:05.736 tests 3 3 3 0 0 00:08:05.736 asserts 20 20 20 0 n/a 00:08:05.736 00:08:05.736 Elapsed time = 0.001 seconds 00:08:05.736 00:08:05.736 real 0m0.040s 00:08:05.736 user 0m0.018s 00:08:05.736 sys 0m0.022s 00:08:05.736 00:21:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.736 00:21:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.736 ************************************ 00:08:05.736 END TEST unittest_init 00:08:05.736 ************************************ 00:08:05.736 00:21:59 -- unit/unittest.sh@288 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:05.736 00:21:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.736 00:21:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.736 00:21:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.736 ************************************ 00:08:05.736 START TEST unittest_keyring 00:08:05.736 ************************************ 00:08:05.736 00:21:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:05.736 00:08:05.736 00:08:05.736 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.736 http://cunit.sourceforge.net/ 00:08:05.736 00:08:05.736 00:08:05.736 Suite: keyring 00:08:05.736 Test: test_keyring_add_remove ...[2024-04-24 00:21:59.402170] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:08:05.736 [2024-04-24 00:21:59.402598] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:08:05.736 [2024-04-24 00:21:59.402759] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the 
keyring 00:08:05.736 passed 00:08:05.736 Test: test_keyring_get_put ...passed 00:08:05.736 00:08:05.736 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.736 suites 1 1 n/a 0 0 00:08:05.736 tests 2 2 2 0 0 00:08:05.736 asserts 44 44 44 0 n/a 00:08:05.736 00:08:05.736 Elapsed time = 0.001 seconds 00:08:05.736 00:08:05.736 real 0m0.041s 00:08:05.736 user 0m0.022s 00:08:05.736 sys 0m0.019s 00:08:05.736 00:21:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.736 00:21:59 -- common/autotest_common.sh@10 -- # set +x 00:08:05.736 ************************************ 00:08:05.736 END TEST unittest_keyring 00:08:05.736 ************************************ 00:08:05.736 00:21:59 -- unit/unittest.sh@290 -- # '[' yes = yes ']' 00:08:05.736 00:21:59 -- unit/unittest.sh@290 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:05.736 00:21:59 -- unit/unittest.sh@291 -- # hostname 00:08:05.736 00:21:59 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:05.994 geninfo: WARNING: invalid characters removed from testname! 00:08:38.174 00:22:31 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:42.375 00:22:36 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:45.656 00:22:39 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:48.977 00:22:42 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:51.550 00:22:45 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:54.829 00:22:48 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:57.357 00:22:50 -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:59.886 00:22:53 -- unit/unittest.sh@299 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:59.886 00:22:53 -- unit/unittest.sh@300 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:00.453 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:00.453 Found 316 entries. 00:09:00.453 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:09:00.453 Writing .css and .png files. 00:09:00.453 Generating output. 00:09:00.453 Processing file include/linux/virtio_ring.h 00:09:00.711 Processing file include/spdk/nvmf_transport.h 00:09:00.711 Processing file include/spdk/base64.h 00:09:00.711 Processing file include/spdk/nvme_spec.h 00:09:00.711 Processing file include/spdk/mmio.h 00:09:00.711 Processing file include/spdk/histogram_data.h 00:09:00.711 Processing file include/spdk/nvme.h 00:09:00.711 Processing file include/spdk/bdev_module.h 00:09:00.711 Processing file include/spdk/trace.h 00:09:00.711 Processing file include/spdk/endian.h 00:09:00.711 Processing file include/spdk/util.h 00:09:00.711 Processing file include/spdk/thread.h 00:09:00.711 Processing file include/spdk_internal/rdma.h 00:09:00.711 Processing file include/spdk_internal/utf.h 00:09:00.711 Processing file include/spdk_internal/virtio.h 00:09:00.711 Processing file include/spdk_internal/sgl.h 00:09:00.711 Processing file include/spdk_internal/sock.h 00:09:00.711 Processing file include/spdk_internal/nvme_tcp.h 00:09:00.968 Processing file lib/accel/accel_rpc.c 00:09:00.968 Processing file lib/accel/accel_sw.c 00:09:00.968 Processing file lib/accel/accel.c 00:09:01.227 Processing file lib/bdev/bdev_rpc.c 00:09:01.227 Processing file lib/bdev/bdev_zone.c 00:09:01.227 Processing file lib/bdev/scsi_nvme.c 00:09:01.227 Processing file lib/bdev/part.c 00:09:01.227 Processing file lib/bdev/bdev.c 00:09:01.487 Processing file lib/blob/request.c 00:09:01.487 Processing file lib/blob/blobstore.c 00:09:01.487 Processing file lib/blob/blob_bs_dev.c 00:09:01.487 Processing file lib/blob/blobstore.h 00:09:01.487 Processing file lib/blob/zeroes.c 00:09:01.487 Processing file lib/blobfs/blobfs.c 00:09:01.487 Processing file lib/blobfs/tree.c 00:09:01.487 Processing file lib/conf/conf.c 00:09:01.746 Processing file lib/dma/dma.c 00:09:02.004 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:09:02.004 Processing file lib/env_dpdk/env.c 00:09:02.004 Processing file lib/env_dpdk/pci_virtio.c 00:09:02.004 Processing file lib/env_dpdk/pci_dpdk.c 00:09:02.004 Processing file lib/env_dpdk/pci.c 00:09:02.004 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:09:02.004 
Processing file lib/env_dpdk/pci_vmd.c 00:09:02.004 Processing file lib/env_dpdk/sigbus_handler.c 00:09:02.004 Processing file lib/env_dpdk/pci_idxd.c 00:09:02.004 Processing file lib/env_dpdk/pci_ioat.c 00:09:02.004 Processing file lib/env_dpdk/pci_event.c 00:09:02.004 Processing file lib/env_dpdk/threads.c 00:09:02.004 Processing file lib/env_dpdk/memory.c 00:09:02.004 Processing file lib/env_dpdk/init.c 00:09:02.004 Processing file lib/event/app.c 00:09:02.004 Processing file lib/event/scheduler_static.c 00:09:02.004 Processing file lib/event/reactor.c 00:09:02.004 Processing file lib/event/app_rpc.c 00:09:02.004 Processing file lib/event/log_rpc.c 00:09:02.571 Processing file lib/ftl/ftl_p2l.c 00:09:02.571 Processing file lib/ftl/ftl_debug.h 00:09:02.571 Processing file lib/ftl/ftl_trace.c 00:09:02.571 Processing file lib/ftl/ftl_rq.c 00:09:02.571 Processing file lib/ftl/ftl_l2p_cache.c 00:09:02.571 Processing file lib/ftl/ftl_core.h 00:09:02.571 Processing file lib/ftl/ftl_writer.h 00:09:02.571 Processing file lib/ftl/ftl_l2p_flat.c 00:09:02.571 Processing file lib/ftl/ftl_debug.c 00:09:02.571 Processing file lib/ftl/ftl_init.c 00:09:02.571 Processing file lib/ftl/ftl_nv_cache.h 00:09:02.571 Processing file lib/ftl/ftl_reloc.c 00:09:02.571 Processing file lib/ftl/ftl_band_ops.c 00:09:02.571 Processing file lib/ftl/ftl_band.h 00:09:02.571 Processing file lib/ftl/ftl_writer.c 00:09:02.571 Processing file lib/ftl/ftl_l2p.c 00:09:02.571 Processing file lib/ftl/ftl_io.h 00:09:02.571 Processing file lib/ftl/ftl_band.c 00:09:02.571 Processing file lib/ftl/ftl_layout.c 00:09:02.571 Processing file lib/ftl/ftl_sb.c 00:09:02.571 Processing file lib/ftl/ftl_nv_cache.c 00:09:02.571 Processing file lib/ftl/ftl_io.c 00:09:02.571 Processing file lib/ftl/ftl_nv_cache_io.h 00:09:02.571 Processing file lib/ftl/ftl_core.c 00:09:02.571 Processing file lib/ftl/base/ftl_base_dev.c 00:09:02.571 Processing file lib/ftl/base/ftl_base_bdev.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:09:02.829 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:09:02.829 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:09:02.829 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:09:03.098 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:09:03.098 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:09:03.098 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:09:03.098 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:09:03.098 Processing file lib/ftl/utils/ftl_mempool.c 00:09:03.098 Processing file lib/ftl/utils/ftl_property.c 00:09:03.098 Processing file lib/ftl/utils/ftl_md.c 00:09:03.098 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:09:03.098 Processing file lib/ftl/utils/ftl_bitmap.c 00:09:03.098 Processing file lib/ftl/utils/ftl_conf.c 00:09:03.098 Processing file lib/ftl/utils/ftl_property.h 00:09:03.098 Processing file lib/ftl/utils/ftl_df.h 00:09:03.098 
Processing file lib/ftl/utils/ftl_addr_utils.h 00:09:03.356 Processing file lib/idxd/idxd.c 00:09:03.356 Processing file lib/idxd/idxd_user.c 00:09:03.356 Processing file lib/idxd/idxd_internal.h 00:09:03.356 Processing file lib/init/rpc.c 00:09:03.356 Processing file lib/init/json_config.c 00:09:03.356 Processing file lib/init/subsystem.c 00:09:03.356 Processing file lib/init/subsystem_rpc.c 00:09:03.356 Processing file lib/ioat/ioat_internal.h 00:09:03.356 Processing file lib/ioat/ioat.c 00:09:03.921 Processing file lib/iscsi/tgt_node.c 00:09:03.922 Processing file lib/iscsi/iscsi.h 00:09:03.922 Processing file lib/iscsi/iscsi.c 00:09:03.922 Processing file lib/iscsi/task.h 00:09:03.922 Processing file lib/iscsi/iscsi_subsystem.c 00:09:03.922 Processing file lib/iscsi/md5.c 00:09:03.922 Processing file lib/iscsi/param.c 00:09:03.922 Processing file lib/iscsi/portal_grp.c 00:09:03.922 Processing file lib/iscsi/iscsi_rpc.c 00:09:03.922 Processing file lib/iscsi/conn.c 00:09:03.922 Processing file lib/iscsi/task.c 00:09:03.922 Processing file lib/iscsi/init_grp.c 00:09:03.922 Processing file lib/json/json_parse.c 00:09:03.922 Processing file lib/json/json_util.c 00:09:03.922 Processing file lib/json/json_write.c 00:09:04.180 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:09:04.180 Processing file lib/jsonrpc/jsonrpc_client.c 00:09:04.180 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:09:04.180 Processing file lib/jsonrpc/jsonrpc_server.c 00:09:04.180 Processing file lib/keyring/keyring.c 00:09:04.180 Processing file lib/keyring/keyring_rpc.c 00:09:04.180 Processing file lib/log/log_deprecated.c 00:09:04.180 Processing file lib/log/log.c 00:09:04.180 Processing file lib/log/log_flags.c 00:09:04.438 Processing file lib/lvol/lvol.c 00:09:04.438 Processing file lib/nbd/nbd.c 00:09:04.438 Processing file lib/nbd/nbd_rpc.c 00:09:04.438 Processing file lib/notify/notify_rpc.c 00:09:04.438 Processing file lib/notify/notify.c 00:09:05.378 Processing file lib/nvme/nvme_io_msg.c 00:09:05.378 Processing file lib/nvme/nvme_fabric.c 00:09:05.378 Processing file lib/nvme/nvme_pcie_internal.h 00:09:05.378 Processing file lib/nvme/nvme_rdma.c 00:09:05.378 Processing file lib/nvme/nvme_pcie.c 00:09:05.378 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:09:05.378 Processing file lib/nvme/nvme_discovery.c 00:09:05.378 Processing file lib/nvme/nvme_transport.c 00:09:05.378 Processing file lib/nvme/nvme_poll_group.c 00:09:05.378 Processing file lib/nvme/nvme_tcp.c 00:09:05.378 Processing file lib/nvme/nvme_ns_cmd.c 00:09:05.378 Processing file lib/nvme/nvme_auth.c 00:09:05.378 Processing file lib/nvme/nvme.c 00:09:05.378 Processing file lib/nvme/nvme_ns.c 00:09:05.378 Processing file lib/nvme/nvme_opal.c 00:09:05.378 Processing file lib/nvme/nvme_qpair.c 00:09:05.378 Processing file lib/nvme/nvme_ctrlr.c 00:09:05.378 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:09:05.378 Processing file lib/nvme/nvme_quirks.c 00:09:05.378 Processing file lib/nvme/nvme_zns.c 00:09:05.378 Processing file lib/nvme/nvme_cuse.c 00:09:05.378 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:09:05.378 Processing file lib/nvme/nvme_internal.h 00:09:05.378 Processing file lib/nvme/nvme_pcie_common.c 00:09:05.638 Processing file lib/nvmf/rdma.c 00:09:05.638 Processing file lib/nvmf/transport.c 00:09:05.638 Processing file lib/nvmf/tcp.c 00:09:05.638 Processing file lib/nvmf/subsystem.c 00:09:05.638 Processing file lib/nvmf/ctrlr_discovery.c 00:09:05.638 Processing file lib/nvmf/ctrlr.c 00:09:05.638 Processing file 
lib/nvmf/ctrlr_bdev.c 00:09:05.638 Processing file lib/nvmf/nvmf_internal.h 00:09:05.638 Processing file lib/nvmf/nvmf_rpc.c 00:09:05.638 Processing file lib/nvmf/nvmf.c 00:09:05.896 Processing file lib/rdma/common.c 00:09:05.896 Processing file lib/rdma/rdma_verbs.c 00:09:05.896 Processing file lib/rpc/rpc.c 00:09:06.154 Processing file lib/scsi/scsi_rpc.c 00:09:06.154 Processing file lib/scsi/scsi.c 00:09:06.154 Processing file lib/scsi/task.c 00:09:06.154 Processing file lib/scsi/scsi_pr.c 00:09:06.154 Processing file lib/scsi/dev.c 00:09:06.154 Processing file lib/scsi/scsi_bdev.c 00:09:06.154 Processing file lib/scsi/lun.c 00:09:06.154 Processing file lib/scsi/port.c 00:09:06.154 Processing file lib/sock/sock.c 00:09:06.154 Processing file lib/sock/sock_rpc.c 00:09:06.424 Processing file lib/thread/thread.c 00:09:06.424 Processing file lib/thread/iobuf.c 00:09:06.424 Processing file lib/trace/trace_rpc.c 00:09:06.424 Processing file lib/trace/trace.c 00:09:06.424 Processing file lib/trace/trace_flags.c 00:09:06.424 Processing file lib/trace_parser/trace.cpp 00:09:06.424 Processing file lib/ut/ut.c 00:09:06.682 Processing file lib/ut_mock/mock.c 00:09:06.940 Processing file lib/util/zipf.c 00:09:06.940 Processing file lib/util/dif.c 00:09:06.940 Processing file lib/util/crc32c.c 00:09:06.940 Processing file lib/util/hexlify.c 00:09:06.940 Processing file lib/util/string.c 00:09:06.940 Processing file lib/util/crc32_ieee.c 00:09:06.940 Processing file lib/util/iov.c 00:09:06.940 Processing file lib/util/fd.c 00:09:06.940 Processing file lib/util/cpuset.c 00:09:06.940 Processing file lib/util/crc32.c 00:09:06.940 Processing file lib/util/strerror_tls.c 00:09:06.940 Processing file lib/util/bit_array.c 00:09:06.940 Processing file lib/util/xor.c 00:09:06.940 Processing file lib/util/file.c 00:09:06.940 Processing file lib/util/pipe.c 00:09:06.940 Processing file lib/util/uuid.c 00:09:06.940 Processing file lib/util/fd_group.c 00:09:06.940 Processing file lib/util/crc16.c 00:09:06.940 Processing file lib/util/math.c 00:09:06.940 Processing file lib/util/crc64.c 00:09:06.940 Processing file lib/util/base64.c 00:09:06.940 Processing file lib/vfio_user/host/vfio_user.c 00:09:06.940 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:07.199 Processing file lib/vhost/rte_vhost_user.c 00:09:07.199 Processing file lib/vhost/vhost.c 00:09:07.199 Processing file lib/vhost/vhost_rpc.c 00:09:07.199 Processing file lib/vhost/vhost_scsi.c 00:09:07.199 Processing file lib/vhost/vhost_blk.c 00:09:07.199 Processing file lib/vhost/vhost_internal.h 00:09:07.457 Processing file lib/virtio/virtio_vhost_user.c 00:09:07.457 Processing file lib/virtio/virtio_pci.c 00:09:07.457 Processing file lib/virtio/virtio.c 00:09:07.457 Processing file lib/virtio/virtio_vfio_user.c 00:09:07.457 Processing file lib/vmd/led.c 00:09:07.457 Processing file lib/vmd/vmd.c 00:09:07.457 Processing file module/accel/dsa/accel_dsa.c 00:09:07.457 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:07.770 Processing file module/accel/error/accel_error_rpc.c 00:09:07.770 Processing file module/accel/error/accel_error.c 00:09:07.770 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:07.770 Processing file module/accel/iaa/accel_iaa.c 00:09:07.770 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:07.770 Processing file module/accel/ioat/accel_ioat.c 00:09:07.770 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:07.770 Processing file module/bdev/aio/bdev_aio.c 00:09:08.028 Processing file 
module/bdev/delay/vbdev_delay.c 00:09:08.028 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:08.028 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:08.028 Processing file module/bdev/error/vbdev_error.c 00:09:08.028 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:08.028 Processing file module/bdev/ftl/bdev_ftl.c 00:09:08.286 Processing file module/bdev/gpt/gpt.c 00:09:08.286 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:08.286 Processing file module/bdev/gpt/gpt.h 00:09:08.286 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:08.286 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:08.286 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:08.286 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:08.545 Processing file module/bdev/malloc/bdev_malloc.c 00:09:08.545 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:08.545 Processing file module/bdev/null/bdev_null_rpc.c 00:09:08.545 Processing file module/bdev/null/bdev_null.c 00:09:08.804 Processing file module/bdev/nvme/bdev_nvme.c 00:09:08.804 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:08.804 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:08.804 Processing file module/bdev/nvme/nvme_rpc.c 00:09:08.804 Processing file module/bdev/nvme/vbdev_opal.c 00:09:08.804 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:08.804 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:09.061 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:09.061 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:09.061 Processing file module/bdev/raid/bdev_raid.c 00:09:09.061 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:09.061 Processing file module/bdev/raid/raid5f.c 00:09:09.061 Processing file module/bdev/raid/raid0.c 00:09:09.061 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:09.061 Processing file module/bdev/raid/bdev_raid.h 00:09:09.061 Processing file module/bdev/raid/concat.c 00:09:09.061 Processing file module/bdev/raid/raid1.c 00:09:09.319 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:09.319 Processing file module/bdev/split/vbdev_split.c 00:09:09.319 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:09.319 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:09.319 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:09.319 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:09.319 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:09.577 Processing file module/blob/bdev/blob_bdev.c 00:09:09.577 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:09.577 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:09.577 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:09.835 Processing file module/event/subsystems/accel/accel.c 00:09:09.835 Processing file module/event/subsystems/bdev/bdev.c 00:09:09.835 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:09.835 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:09.835 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:10.093 Processing file module/event/subsystems/keyring/keyring.c 00:09:10.093 Processing file module/event/subsystems/nbd/nbd.c 00:09:10.093 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:10.093 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:10.093 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:10.351 Processing file module/event/subsystems/scsi/scsi.c 00:09:10.351 Processing file module/event/subsystems/sock/sock.c 
00:09:10.351 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:10.351 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:10.609 Processing file module/event/subsystems/vmd/vmd.c 00:09:10.609 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:10.609 Processing file module/keyring/file/keyring.c 00:09:10.609 Processing file module/keyring/file/keyring_rpc.c 00:09:10.609 Processing file module/keyring/linux/keyring_rpc.c 00:09:10.609 Processing file module/keyring/linux/keyring.c 00:09:10.609 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:10.866 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:10.866 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:10.866 Processing file module/sock/sock_kernel.h 00:09:11.125 Processing file module/sock/posix/posix.c 00:09:11.125 Writing directory view page. 00:09:11.125 Overall coverage rate: 00:09:11.125 lines......: 38.9% (39958 of 102602 lines) 00:09:11.125 functions..: 42.7% (3652 of 8562 functions) 00:09:11.125 00:09:11.125 00:09:11.125 ===================== 00:09:11.125 All unit tests passed 00:09:11.125 ===================== 00:09:11.125 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:11.125 00:23:04 -- unit/unittest.sh@303 -- # set +x 00:09:11.125 00:09:11.125 00:09:11.125 ************************************ 00:09:11.125 END TEST unittest 00:09:11.125 ************************************ 00:09:11.125 00:09:11.125 real 3m41.726s 00:09:11.125 user 3m11.171s 00:09:11.125 sys 0m21.743s 00:09:11.125 00:23:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:11.125 00:23:04 -- common/autotest_common.sh@10 -- # set +x 00:09:11.125 00:23:04 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:11.125 00:23:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:11.125 00:23:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:11.125 00:23:04 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:11.125 00:23:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:11.125 00:23:04 -- common/autotest_common.sh@10 -- # set +x 00:09:11.125 00:23:04 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:11.125 00:23:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:11.125 00:23:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.125 00:23:04 -- common/autotest_common.sh@10 -- # set +x 00:09:11.125 ************************************ 00:09:11.125 START TEST env 00:09:11.125 ************************************ 00:09:11.125 00:23:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:11.382 * Looking for test storage... 
00:09:11.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:11.383 00:23:04 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:11.383 00:23:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:11.383 00:23:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.383 00:23:04 -- common/autotest_common.sh@10 -- # set +x 00:09:11.383 ************************************ 00:09:11.383 START TEST env_memory 00:09:11.383 ************************************ 00:09:11.383 00:23:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:11.383 00:09:11.383 00:09:11.383 CUnit - A unit testing framework for C - Version 2.1-3 00:09:11.383 http://cunit.sourceforge.net/ 00:09:11.383 00:09:11.383 00:09:11.383 Suite: memory 00:09:11.383 Test: alloc and free memory map ...[2024-04-24 00:23:05.065541] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:11.383 passed 00:09:11.383 Test: mem map translation ...[2024-04-24 00:23:05.121325] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:11.383 [2024-04-24 00:23:05.121491] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:11.383 [2024-04-24 00:23:05.121623] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:11.383 [2024-04-24 00:23:05.121714] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:11.640 passed 00:09:11.640 Test: mem map registration ...[2024-04-24 00:23:05.216868] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:11.640 [2024-04-24 00:23:05.217044] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:11.640 passed 00:09:11.640 Test: mem map adjacent registrations ...passed 00:09:11.640 00:09:11.640 Run Summary: Type Total Ran Passed Failed Inactive 00:09:11.640 suites 1 1 n/a 0 0 00:09:11.640 tests 4 4 4 0 0 00:09:11.640 asserts 152 152 152 0 n/a 00:09:11.640 00:09:11.640 Elapsed time = 0.331 seconds 00:09:11.640 00:09:11.640 real 0m0.366s 00:09:11.640 user 0m0.346s 00:09:11.640 sys 0m0.020s 00:09:11.640 00:23:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:11.640 00:23:05 -- common/autotest_common.sh@10 -- # set +x 00:09:11.640 ************************************ 00:09:11.640 END TEST env_memory 00:09:11.640 ************************************ 00:09:11.640 00:23:05 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:11.640 00:23:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:11.640 00:23:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.640 00:23:05 -- common/autotest_common.sh@10 -- # set +x 00:09:11.898 ************************************ 00:09:11.898 START TEST env_vtophys 00:09:11.898 ************************************ 00:09:11.898 00:23:05 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:11.898 EAL: lib.eal log level changed from notice to debug 00:09:11.898 EAL: Detected lcore 0 as core 0 on socket 0 00:09:11.898 EAL: Detected lcore 1 as core 0 on socket 0 00:09:11.898 EAL: Detected lcore 2 as core 0 on socket 0 00:09:11.898 EAL: Detected lcore 3 as core 0 on socket 0 00:09:11.898 EAL: Detected lcore 4 as core 0 on socket 0 00:09:11.898 EAL: Detected lcore 5 as core 0 on socket 0 00:09:11.898 EAL: Detected lcore 6 as core 0 on socket 0 00:09:11.898 EAL: Detected lcore 7 as core 0 on socket 0 00:09:11.898 EAL: Detected lcore 8 as core 0 on socket 0 00:09:11.898 EAL: Detected lcore 9 as core 0 on socket 0 00:09:11.898 EAL: Maximum logical cores by configuration: 128 00:09:11.898 EAL: Detected CPU lcores: 10 00:09:11.898 EAL: Detected NUMA nodes: 1 00:09:11.898 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:09:11.898 EAL: Checking presence of .so 'librte_eal.so.24' 00:09:11.898 EAL: Checking presence of .so 'librte_eal.so' 00:09:11.898 EAL: Detected static linkage of DPDK 00:09:11.898 EAL: No shared files mode enabled, IPC will be disabled 00:09:11.898 EAL: Selected IOVA mode 'PA' 00:09:11.898 EAL: Probing VFIO support... 00:09:11.898 EAL: IOMMU type 1 (Type 1) is supported 00:09:11.898 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:11.898 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:11.898 EAL: VFIO support initialized 00:09:11.898 EAL: Ask a virtual area of 0x2e000 bytes 00:09:11.898 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:11.898 EAL: Setting up physically contiguous memory... 00:09:11.898 EAL: Setting maximum number of open files to 1048576 00:09:11.898 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:11.898 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:11.898 EAL: Ask a virtual area of 0x61000 bytes 00:09:11.898 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:11.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:11.898 EAL: Ask a virtual area of 0x400000000 bytes 00:09:11.898 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:11.898 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:11.898 EAL: Ask a virtual area of 0x61000 bytes 00:09:11.898 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:11.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:11.898 EAL: Ask a virtual area of 0x400000000 bytes 00:09:11.898 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:11.898 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:11.898 EAL: Ask a virtual area of 0x61000 bytes 00:09:11.898 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:11.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:11.898 EAL: Ask a virtual area of 0x400000000 bytes 00:09:11.898 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:11.898 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:11.898 EAL: Ask a virtual area of 0x61000 bytes 00:09:11.898 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:11.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:11.898 EAL: Ask a virtual area of 0x400000000 bytes 00:09:11.898 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:11.898 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:11.898 EAL: Hugepages will be freed exactly as allocated. 
00:09:11.898 EAL: No shared files mode enabled, IPC is disabled 00:09:11.898 EAL: No shared files mode enabled, IPC is disabled 00:09:12.157 EAL: TSC frequency is ~2100000 KHz 00:09:12.157 EAL: Main lcore 0 is ready (tid=7fceef91ba80;cpuset=[0]) 00:09:12.157 EAL: Trying to obtain current memory policy. 00:09:12.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.157 EAL: Restoring previous memory policy: 0 00:09:12.157 EAL: request: mp_malloc_sync 00:09:12.157 EAL: No shared files mode enabled, IPC is disabled 00:09:12.157 EAL: Heap on socket 0 was expanded by 2MB 00:09:12.157 EAL: No shared files mode enabled, IPC is disabled 00:09:12.157 EAL: Mem event callback 'spdk:(nil)' registered 00:09:12.157 00:09:12.157 00:09:12.157 CUnit - A unit testing framework for C - Version 2.1-3 00:09:12.157 http://cunit.sourceforge.net/ 00:09:12.157 00:09:12.157 00:09:12.157 Suite: components_suite 00:09:12.723 Test: vtophys_malloc_test ...passed 00:09:12.723 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:12.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.723 EAL: Restoring previous memory policy: 0 00:09:12.723 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.723 EAL: request: mp_malloc_sync 00:09:12.723 EAL: No shared files mode enabled, IPC is disabled 00:09:12.723 EAL: Heap on socket 0 was expanded by 4MB 00:09:12.723 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.723 EAL: request: mp_malloc_sync 00:09:12.723 EAL: No shared files mode enabled, IPC is disabled 00:09:12.723 EAL: Heap on socket 0 was shrunk by 4MB 00:09:12.723 EAL: Trying to obtain current memory policy. 00:09:12.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.723 EAL: Restoring previous memory policy: 0 00:09:12.723 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.723 EAL: request: mp_malloc_sync 00:09:12.723 EAL: No shared files mode enabled, IPC is disabled 00:09:12.723 EAL: Heap on socket 0 was expanded by 6MB 00:09:12.723 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.723 EAL: request: mp_malloc_sync 00:09:12.723 EAL: No shared files mode enabled, IPC is disabled 00:09:12.723 EAL: Heap on socket 0 was shrunk by 6MB 00:09:12.723 EAL: Trying to obtain current memory policy. 00:09:12.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.723 EAL: Restoring previous memory policy: 0 00:09:12.723 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.723 EAL: request: mp_malloc_sync 00:09:12.723 EAL: No shared files mode enabled, IPC is disabled 00:09:12.723 EAL: Heap on socket 0 was expanded by 10MB 00:09:12.723 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.723 EAL: request: mp_malloc_sync 00:09:12.723 EAL: No shared files mode enabled, IPC is disabled 00:09:12.723 EAL: Heap on socket 0 was shrunk by 10MB 00:09:12.723 EAL: Trying to obtain current memory policy. 00:09:12.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.723 EAL: Restoring previous memory policy: 0 00:09:12.723 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.723 EAL: request: mp_malloc_sync 00:09:12.723 EAL: No shared files mode enabled, IPC is disabled 00:09:12.723 EAL: Heap on socket 0 was expanded by 18MB 00:09:12.723 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.723 EAL: request: mp_malloc_sync 00:09:12.723 EAL: No shared files mode enabled, IPC is disabled 00:09:12.723 EAL: Heap on socket 0 was shrunk by 18MB 00:09:12.723 EAL: Trying to obtain current memory policy. 
00:09:12.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.723 EAL: Restoring previous memory policy: 0 00:09:12.723 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.723 EAL: request: mp_malloc_sync 00:09:12.723 EAL: No shared files mode enabled, IPC is disabled 00:09:12.723 EAL: Heap on socket 0 was expanded by 34MB 00:09:12.982 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.982 EAL: request: mp_malloc_sync 00:09:12.982 EAL: No shared files mode enabled, IPC is disabled 00:09:12.982 EAL: Heap on socket 0 was shrunk by 34MB 00:09:12.982 EAL: Trying to obtain current memory policy. 00:09:12.982 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:12.982 EAL: Restoring previous memory policy: 0 00:09:12.982 EAL: Calling mem event callback 'spdk:(nil)' 00:09:12.982 EAL: request: mp_malloc_sync 00:09:12.982 EAL: No shared files mode enabled, IPC is disabled 00:09:12.982 EAL: Heap on socket 0 was expanded by 66MB 00:09:13.240 EAL: Calling mem event callback 'spdk:(nil)' 00:09:13.240 EAL: request: mp_malloc_sync 00:09:13.240 EAL: No shared files mode enabled, IPC is disabled 00:09:13.240 EAL: Heap on socket 0 was shrunk by 66MB 00:09:13.240 EAL: Trying to obtain current memory policy. 00:09:13.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:13.240 EAL: Restoring previous memory policy: 0 00:09:13.240 EAL: Calling mem event callback 'spdk:(nil)' 00:09:13.240 EAL: request: mp_malloc_sync 00:09:13.240 EAL: No shared files mode enabled, IPC is disabled 00:09:13.240 EAL: Heap on socket 0 was expanded by 130MB 00:09:13.498 EAL: Calling mem event callback 'spdk:(nil)' 00:09:13.498 EAL: request: mp_malloc_sync 00:09:13.498 EAL: No shared files mode enabled, IPC is disabled 00:09:13.498 EAL: Heap on socket 0 was shrunk by 130MB 00:09:13.757 EAL: Trying to obtain current memory policy. 00:09:13.757 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:13.757 EAL: Restoring previous memory policy: 0 00:09:13.757 EAL: Calling mem event callback 'spdk:(nil)' 00:09:13.757 EAL: request: mp_malloc_sync 00:09:13.757 EAL: No shared files mode enabled, IPC is disabled 00:09:13.757 EAL: Heap on socket 0 was expanded by 258MB 00:09:14.324 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.324 EAL: request: mp_malloc_sync 00:09:14.324 EAL: No shared files mode enabled, IPC is disabled 00:09:14.324 EAL: Heap on socket 0 was shrunk by 258MB 00:09:14.941 EAL: Trying to obtain current memory policy. 00:09:14.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:14.941 EAL: Restoring previous memory policy: 0 00:09:14.941 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.941 EAL: request: mp_malloc_sync 00:09:14.941 EAL: No shared files mode enabled, IPC is disabled 00:09:14.941 EAL: Heap on socket 0 was expanded by 514MB 00:09:16.315 EAL: Calling mem event callback 'spdk:(nil)' 00:09:16.315 EAL: request: mp_malloc_sync 00:09:16.315 EAL: No shared files mode enabled, IPC is disabled 00:09:16.315 EAL: Heap on socket 0 was shrunk by 514MB 00:09:17.258 EAL: Trying to obtain current memory policy. 
00:09:17.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:17.258 EAL: Restoring previous memory policy: 0 00:09:17.258 EAL: Calling mem event callback 'spdk:(nil)' 00:09:17.258 EAL: request: mp_malloc_sync 00:09:17.258 EAL: No shared files mode enabled, IPC is disabled 00:09:17.258 EAL: Heap on socket 0 was expanded by 1026MB 00:09:19.783 EAL: Calling mem event callback 'spdk:(nil)' 00:09:19.783 EAL: request: mp_malloc_sync 00:09:19.783 EAL: No shared files mode enabled, IPC is disabled 00:09:19.783 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:21.685 passed 00:09:21.685 00:09:21.685 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.685 suites 1 1 n/a 0 0 00:09:21.685 tests 2 2 2 0 0 00:09:21.685 asserts 6335 6335 6335 0 n/a 00:09:21.685 00:09:21.685 Elapsed time = 9.228 seconds 00:09:21.685 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.685 EAL: request: mp_malloc_sync 00:09:21.685 EAL: No shared files mode enabled, IPC is disabled 00:09:21.685 EAL: Heap on socket 0 was shrunk by 2MB 00:09:21.685 EAL: No shared files mode enabled, IPC is disabled 00:09:21.685 EAL: No shared files mode enabled, IPC is disabled 00:09:21.685 EAL: No shared files mode enabled, IPC is disabled 00:09:21.685 ************************************ 00:09:21.685 END TEST env_vtophys 00:09:21.685 ************************************ 00:09:21.685 00:09:21.685 real 0m9.588s 00:09:21.685 user 0m8.397s 00:09:21.685 sys 0m1.034s 00:09:21.685 00:23:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:21.685 00:23:15 -- common/autotest_common.sh@10 -- # set +x 00:09:21.685 00:23:15 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:21.685 00:23:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.685 00:23:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.685 00:23:15 -- common/autotest_common.sh@10 -- # set +x 00:09:21.685 ************************************ 00:09:21.685 START TEST env_pci 00:09:21.685 ************************************ 00:09:21.685 00:23:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:21.685 00:09:21.685 00:09:21.685 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.685 http://cunit.sourceforge.net/ 00:09:21.685 00:09:21.685 00:09:21.685 Suite: pci 00:09:21.685 Test: pci_hook ...[2024-04-24 00:23:15.215942] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 110209 has claimed it 00:09:21.685 passed 00:09:21.685 00:09:21.685 EAL: Cannot find device (10000:00:01.0) 00:09:21.685 EAL: Failed to attach device on primary process 00:09:21.685 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.685 suites 1 1 n/a 0 0 00:09:21.685 tests 1 1 1 0 0 00:09:21.685 asserts 25 25 25 0 n/a 00:09:21.685 00:09:21.685 Elapsed time = 0.007 seconds 00:09:21.685 00:09:21.685 real 0m0.121s 00:09:21.685 user 0m0.062s 00:09:21.685 sys 0m0.060s 00:09:21.685 00:23:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:21.685 00:23:15 -- common/autotest_common.sh@10 -- # set +x 00:09:21.685 ************************************ 00:09:21.685 END TEST env_pci 00:09:21.685 ************************************ 00:09:21.685 00:23:15 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:21.685 00:23:15 -- env/env.sh@15 -- # uname 00:09:21.685 00:23:15 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:21.685 00:23:15 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:09:21.685 00:23:15 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:21.685 00:23:15 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:09:21.685 00:23:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.685 00:23:15 -- common/autotest_common.sh@10 -- # set +x 00:09:21.685 ************************************ 00:09:21.685 START TEST env_dpdk_post_init 00:09:21.685 ************************************ 00:09:21.685 00:23:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:21.943 EAL: Detected CPU lcores: 10 00:09:21.943 EAL: Detected NUMA nodes: 1 00:09:21.944 EAL: Detected static linkage of DPDK 00:09:21.944 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:21.944 EAL: Selected IOVA mode 'PA' 00:09:21.944 EAL: VFIO support initialized 00:09:21.944 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:21.944 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:21.944 Starting DPDK initialization... 00:09:21.944 Starting SPDK post initialization... 00:09:21.944 SPDK NVMe probe 00:09:21.944 Attaching to 0000:00:10.0 00:09:21.944 Attached to 0000:00:10.0 00:09:21.944 Cleaning up... 00:09:21.944 00:09:21.944 real 0m0.300s 00:09:21.944 user 0m0.104s 00:09:21.944 sys 0m0.098s 00:09:21.944 00:23:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:21.944 00:23:15 -- common/autotest_common.sh@10 -- # set +x 00:09:21.944 ************************************ 00:09:21.944 END TEST env_dpdk_post_init 00:09:21.944 ************************************ 00:09:22.201 00:23:15 -- env/env.sh@26 -- # uname 00:09:22.201 00:23:15 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:22.201 00:23:15 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:22.201 00:23:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.201 00:23:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.201 00:23:15 -- common/autotest_common.sh@10 -- # set +x 00:09:22.201 ************************************ 00:09:22.201 START TEST env_mem_callbacks 00:09:22.201 ************************************ 00:09:22.201 00:23:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:22.201 EAL: Detected CPU lcores: 10 00:09:22.201 EAL: Detected NUMA nodes: 1 00:09:22.201 EAL: Detected static linkage of DPDK 00:09:22.201 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:22.201 EAL: Selected IOVA mode 'PA' 00:09:22.201 EAL: VFIO support initialized 00:09:22.459 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:22.459 00:09:22.459 00:09:22.459 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.459 http://cunit.sourceforge.net/ 00:09:22.459 00:09:22.459 00:09:22.459 Suite: memory 00:09:22.459 Test: test ... 
00:09:22.459 register 0x200000200000 2097152 00:09:22.459 malloc 3145728 00:09:22.459 register 0x200000400000 4194304 00:09:22.459 buf 0x2000004fffc0 len 3145728 PASSED 00:09:22.459 malloc 64 00:09:22.459 buf 0x2000004ffec0 len 64 PASSED 00:09:22.459 malloc 4194304 00:09:22.459 register 0x200000800000 6291456 00:09:22.459 buf 0x2000009fffc0 len 4194304 PASSED 00:09:22.459 free 0x2000004fffc0 3145728 00:09:22.459 free 0x2000004ffec0 64 00:09:22.459 unregister 0x200000400000 4194304 PASSED 00:09:22.459 free 0x2000009fffc0 4194304 00:09:22.459 unregister 0x200000800000 6291456 PASSED 00:09:22.459 malloc 8388608 00:09:22.459 register 0x200000400000 10485760 00:09:22.459 buf 0x2000005fffc0 len 8388608 PASSED 00:09:22.459 free 0x2000005fffc0 8388608 00:09:22.459 unregister 0x200000400000 10485760 PASSED 00:09:22.459 passed 00:09:22.459 00:09:22.459 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.459 suites 1 1 n/a 0 0 00:09:22.459 tests 1 1 1 0 0 00:09:22.459 asserts 15 15 15 0 n/a 00:09:22.459 00:09:22.459 Elapsed time = 0.112 seconds 00:09:22.459 ************************************ 00:09:22.459 END TEST env_mem_callbacks 00:09:22.459 ************************************ 00:09:22.459 00:09:22.459 real 0m0.362s 00:09:22.459 user 0m0.174s 00:09:22.459 sys 0m0.088s 00:09:22.459 00:23:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:22.459 00:23:16 -- common/autotest_common.sh@10 -- # set +x 00:09:22.459 00:09:22.459 real 0m11.362s 00:09:22.459 user 0m9.385s 00:09:22.459 sys 0m1.632s 00:09:22.459 00:23:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:22.459 00:23:16 -- common/autotest_common.sh@10 -- # set +x 00:09:22.459 ************************************ 00:09:22.459 END TEST env 00:09:22.459 ************************************ 00:09:22.718 00:23:16 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:22.718 00:23:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.718 00:23:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.718 00:23:16 -- common/autotest_common.sh@10 -- # set +x 00:09:22.718 ************************************ 00:09:22.718 START TEST rpc 00:09:22.718 ************************************ 00:09:22.718 00:23:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:22.718 * Looking for test storage... 00:09:22.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:22.718 00:23:16 -- rpc/rpc.sh@65 -- # spdk_pid=110354 00:09:22.718 00:23:16 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:22.718 00:23:16 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:22.718 00:23:16 -- rpc/rpc.sh@67 -- # waitforlisten 110354 00:09:22.718 00:23:16 -- common/autotest_common.sh@817 -- # '[' -z 110354 ']' 00:09:22.718 00:23:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.718 00:23:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:22.718 00:23:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:22.718 00:23:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:22.718 00:23:16 -- common/autotest_common.sh@10 -- # set +x 00:09:22.975 [2024-04-24 00:23:16.516858] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:09:22.976 [2024-04-24 00:23:16.517068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110354 ] 00:09:22.976 [2024-04-24 00:23:16.691638] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.233 [2024-04-24 00:23:16.932871] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:23.233 [2024-04-24 00:23:16.933150] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 110354' to capture a snapshot of events at runtime. 00:09:23.233 [2024-04-24 00:23:16.933281] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.233 [2024-04-24 00:23:16.933388] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.233 [2024-04-24 00:23:16.933525] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid110354 for offline analysis/debug. 00:09:23.233 [2024-04-24 00:23:16.933692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.606 00:23:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:24.606 00:23:17 -- common/autotest_common.sh@850 -- # return 0 00:09:24.606 00:23:17 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:24.606 00:23:17 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:24.606 00:23:17 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:24.606 00:23:17 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:24.606 00:23:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:24.606 00:23:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.606 00:23:17 -- common/autotest_common.sh@10 -- # set +x 00:09:24.606 ************************************ 00:09:24.606 START TEST rpc_integrity 00:09:24.606 ************************************ 00:09:24.606 00:23:18 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:09:24.606 00:23:18 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:24.606 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.606 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.606 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.606 00:23:18 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:24.606 00:23:18 -- rpc/rpc.sh@13 -- # jq length 00:09:24.606 00:23:18 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:24.606 00:23:18 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:24.607 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.607 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.607 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.607 00:23:18 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:24.607 00:23:18 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 
00:09:24.607 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.607 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.607 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.607 00:23:18 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:24.607 { 00:09:24.607 "name": "Malloc0", 00:09:24.607 "aliases": [ 00:09:24.607 "616374c6-0fee-405d-8833-2792cf277fa3" 00:09:24.607 ], 00:09:24.607 "product_name": "Malloc disk", 00:09:24.607 "block_size": 512, 00:09:24.607 "num_blocks": 16384, 00:09:24.607 "uuid": "616374c6-0fee-405d-8833-2792cf277fa3", 00:09:24.607 "assigned_rate_limits": { 00:09:24.607 "rw_ios_per_sec": 0, 00:09:24.607 "rw_mbytes_per_sec": 0, 00:09:24.607 "r_mbytes_per_sec": 0, 00:09:24.607 "w_mbytes_per_sec": 0 00:09:24.607 }, 00:09:24.607 "claimed": false, 00:09:24.607 "zoned": false, 00:09:24.607 "supported_io_types": { 00:09:24.607 "read": true, 00:09:24.607 "write": true, 00:09:24.607 "unmap": true, 00:09:24.607 "write_zeroes": true, 00:09:24.607 "flush": true, 00:09:24.607 "reset": true, 00:09:24.607 "compare": false, 00:09:24.607 "compare_and_write": false, 00:09:24.607 "abort": true, 00:09:24.607 "nvme_admin": false, 00:09:24.607 "nvme_io": false 00:09:24.607 }, 00:09:24.607 "memory_domains": [ 00:09:24.607 { 00:09:24.607 "dma_device_id": "system", 00:09:24.607 "dma_device_type": 1 00:09:24.607 }, 00:09:24.607 { 00:09:24.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.607 "dma_device_type": 2 00:09:24.607 } 00:09:24.607 ], 00:09:24.607 "driver_specific": {} 00:09:24.607 } 00:09:24.607 ]' 00:09:24.607 00:23:18 -- rpc/rpc.sh@17 -- # jq length 00:09:24.607 00:23:18 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:24.607 00:23:18 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:24.607 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.607 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.607 [2024-04-24 00:23:18.194725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:24.607 [2024-04-24 00:23:18.195307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.607 [2024-04-24 00:23:18.195453] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:09:24.607 [2024-04-24 00:23:18.195585] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.607 [2024-04-24 00:23:18.198355] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.607 [2024-04-24 00:23:18.198535] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:24.607 Passthru0 00:09:24.607 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.607 00:23:18 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:24.607 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.607 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.607 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.607 00:23:18 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:24.607 { 00:09:24.607 "name": "Malloc0", 00:09:24.607 "aliases": [ 00:09:24.607 "616374c6-0fee-405d-8833-2792cf277fa3" 00:09:24.607 ], 00:09:24.607 "product_name": "Malloc disk", 00:09:24.607 "block_size": 512, 00:09:24.607 "num_blocks": 16384, 00:09:24.607 "uuid": "616374c6-0fee-405d-8833-2792cf277fa3", 00:09:24.607 "assigned_rate_limits": { 00:09:24.607 "rw_ios_per_sec": 0, 00:09:24.607 "rw_mbytes_per_sec": 0, 00:09:24.607 "r_mbytes_per_sec": 0, 00:09:24.607 
"w_mbytes_per_sec": 0 00:09:24.607 }, 00:09:24.607 "claimed": true, 00:09:24.607 "claim_type": "exclusive_write", 00:09:24.607 "zoned": false, 00:09:24.607 "supported_io_types": { 00:09:24.607 "read": true, 00:09:24.607 "write": true, 00:09:24.607 "unmap": true, 00:09:24.607 "write_zeroes": true, 00:09:24.607 "flush": true, 00:09:24.607 "reset": true, 00:09:24.607 "compare": false, 00:09:24.607 "compare_and_write": false, 00:09:24.607 "abort": true, 00:09:24.607 "nvme_admin": false, 00:09:24.607 "nvme_io": false 00:09:24.607 }, 00:09:24.607 "memory_domains": [ 00:09:24.607 { 00:09:24.607 "dma_device_id": "system", 00:09:24.607 "dma_device_type": 1 00:09:24.607 }, 00:09:24.607 { 00:09:24.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.607 "dma_device_type": 2 00:09:24.607 } 00:09:24.607 ], 00:09:24.607 "driver_specific": {} 00:09:24.607 }, 00:09:24.607 { 00:09:24.607 "name": "Passthru0", 00:09:24.607 "aliases": [ 00:09:24.607 "2e39191c-8b57-5656-815e-dfeb4a7fa7aa" 00:09:24.607 ], 00:09:24.607 "product_name": "passthru", 00:09:24.607 "block_size": 512, 00:09:24.607 "num_blocks": 16384, 00:09:24.607 "uuid": "2e39191c-8b57-5656-815e-dfeb4a7fa7aa", 00:09:24.607 "assigned_rate_limits": { 00:09:24.607 "rw_ios_per_sec": 0, 00:09:24.607 "rw_mbytes_per_sec": 0, 00:09:24.607 "r_mbytes_per_sec": 0, 00:09:24.607 "w_mbytes_per_sec": 0 00:09:24.607 }, 00:09:24.607 "claimed": false, 00:09:24.607 "zoned": false, 00:09:24.607 "supported_io_types": { 00:09:24.607 "read": true, 00:09:24.607 "write": true, 00:09:24.607 "unmap": true, 00:09:24.607 "write_zeroes": true, 00:09:24.607 "flush": true, 00:09:24.607 "reset": true, 00:09:24.607 "compare": false, 00:09:24.607 "compare_and_write": false, 00:09:24.607 "abort": true, 00:09:24.607 "nvme_admin": false, 00:09:24.607 "nvme_io": false 00:09:24.607 }, 00:09:24.607 "memory_domains": [ 00:09:24.607 { 00:09:24.607 "dma_device_id": "system", 00:09:24.607 "dma_device_type": 1 00:09:24.607 }, 00:09:24.607 { 00:09:24.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.607 "dma_device_type": 2 00:09:24.607 } 00:09:24.607 ], 00:09:24.607 "driver_specific": { 00:09:24.607 "passthru": { 00:09:24.607 "name": "Passthru0", 00:09:24.607 "base_bdev_name": "Malloc0" 00:09:24.607 } 00:09:24.607 } 00:09:24.607 } 00:09:24.607 ]' 00:09:24.607 00:23:18 -- rpc/rpc.sh@21 -- # jq length 00:09:24.607 00:23:18 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:24.607 00:23:18 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:24.607 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.607 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.607 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.607 00:23:18 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:24.607 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.607 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.607 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.607 00:23:18 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:24.607 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.607 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.607 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.607 00:23:18 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:24.607 00:23:18 -- rpc/rpc.sh@26 -- # jq length 00:09:24.607 00:23:18 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:24.607 00:09:24.607 real 0m0.341s 00:09:24.607 user 0m0.196s 00:09:24.607 sys 0m0.035s 00:09:24.607 00:23:18 
-- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:24.607 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.607 ************************************ 00:09:24.607 END TEST rpc_integrity 00:09:24.607 ************************************ 00:09:24.865 00:23:18 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:24.865 00:23:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:24.865 00:23:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.865 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.865 ************************************ 00:09:24.865 START TEST rpc_plugins 00:09:24.865 ************************************ 00:09:24.865 00:23:18 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:09:24.865 00:23:18 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:24.865 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.865 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.865 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.865 00:23:18 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:24.865 00:23:18 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:24.865 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.865 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.865 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.865 00:23:18 -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:24.865 { 00:09:24.865 "name": "Malloc1", 00:09:24.865 "aliases": [ 00:09:24.865 "4f52de55-dbfc-42ec-b386-195f9a1947ba" 00:09:24.865 ], 00:09:24.865 "product_name": "Malloc disk", 00:09:24.865 "block_size": 4096, 00:09:24.865 "num_blocks": 256, 00:09:24.865 "uuid": "4f52de55-dbfc-42ec-b386-195f9a1947ba", 00:09:24.865 "assigned_rate_limits": { 00:09:24.865 "rw_ios_per_sec": 0, 00:09:24.865 "rw_mbytes_per_sec": 0, 00:09:24.865 "r_mbytes_per_sec": 0, 00:09:24.865 "w_mbytes_per_sec": 0 00:09:24.865 }, 00:09:24.865 "claimed": false, 00:09:24.865 "zoned": false, 00:09:24.865 "supported_io_types": { 00:09:24.865 "read": true, 00:09:24.865 "write": true, 00:09:24.865 "unmap": true, 00:09:24.865 "write_zeroes": true, 00:09:24.865 "flush": true, 00:09:24.865 "reset": true, 00:09:24.865 "compare": false, 00:09:24.865 "compare_and_write": false, 00:09:24.865 "abort": true, 00:09:24.865 "nvme_admin": false, 00:09:24.865 "nvme_io": false 00:09:24.865 }, 00:09:24.865 "memory_domains": [ 00:09:24.865 { 00:09:24.865 "dma_device_id": "system", 00:09:24.865 "dma_device_type": 1 00:09:24.865 }, 00:09:24.865 { 00:09:24.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.865 "dma_device_type": 2 00:09:24.865 } 00:09:24.865 ], 00:09:24.865 "driver_specific": {} 00:09:24.865 } 00:09:24.865 ]' 00:09:24.865 00:23:18 -- rpc/rpc.sh@32 -- # jq length 00:09:24.865 00:23:18 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:24.865 00:23:18 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:24.865 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.865 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.865 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.865 00:23:18 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:24.865 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.865 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.865 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.865 00:23:18 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:24.865 00:23:18 -- rpc/rpc.sh@36 -- # 
jq length 00:09:24.865 00:23:18 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:24.865 00:09:24.865 real 0m0.161s 00:09:24.865 user 0m0.103s 00:09:24.865 sys 0m0.016s 00:09:24.865 00:23:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:24.865 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:24.865 ************************************ 00:09:24.865 END TEST rpc_plugins 00:09:24.865 ************************************ 00:09:25.122 00:23:18 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:25.122 00:23:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:25.122 00:23:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:25.122 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:25.122 ************************************ 00:09:25.122 START TEST rpc_trace_cmd_test 00:09:25.122 ************************************ 00:09:25.122 00:23:18 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:09:25.122 00:23:18 -- rpc/rpc.sh@40 -- # local info 00:09:25.122 00:23:18 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:25.122 00:23:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.122 00:23:18 -- common/autotest_common.sh@10 -- # set +x 00:09:25.122 00:23:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.122 00:23:18 -- rpc/rpc.sh@42 -- # info='{ 00:09:25.122 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid110354", 00:09:25.122 "tpoint_group_mask": "0x8", 00:09:25.122 "iscsi_conn": { 00:09:25.122 "mask": "0x2", 00:09:25.122 "tpoint_mask": "0x0" 00:09:25.122 }, 00:09:25.122 "scsi": { 00:09:25.122 "mask": "0x4", 00:09:25.122 "tpoint_mask": "0x0" 00:09:25.122 }, 00:09:25.122 "bdev": { 00:09:25.122 "mask": "0x8", 00:09:25.122 "tpoint_mask": "0xffffffffffffffff" 00:09:25.122 }, 00:09:25.122 "nvmf_rdma": { 00:09:25.122 "mask": "0x10", 00:09:25.122 "tpoint_mask": "0x0" 00:09:25.122 }, 00:09:25.122 "nvmf_tcp": { 00:09:25.122 "mask": "0x20", 00:09:25.122 "tpoint_mask": "0x0" 00:09:25.122 }, 00:09:25.123 "ftl": { 00:09:25.123 "mask": "0x40", 00:09:25.123 "tpoint_mask": "0x0" 00:09:25.123 }, 00:09:25.123 "blobfs": { 00:09:25.123 "mask": "0x80", 00:09:25.123 "tpoint_mask": "0x0" 00:09:25.123 }, 00:09:25.123 "dsa": { 00:09:25.123 "mask": "0x200", 00:09:25.123 "tpoint_mask": "0x0" 00:09:25.123 }, 00:09:25.123 "thread": { 00:09:25.123 "mask": "0x400", 00:09:25.123 "tpoint_mask": "0x0" 00:09:25.123 }, 00:09:25.123 "nvme_pcie": { 00:09:25.123 "mask": "0x800", 00:09:25.123 "tpoint_mask": "0x0" 00:09:25.123 }, 00:09:25.123 "iaa": { 00:09:25.123 "mask": "0x1000", 00:09:25.123 "tpoint_mask": "0x0" 00:09:25.123 }, 00:09:25.123 "nvme_tcp": { 00:09:25.123 "mask": "0x2000", 00:09:25.123 "tpoint_mask": "0x0" 00:09:25.123 }, 00:09:25.123 "bdev_nvme": { 00:09:25.123 "mask": "0x4000", 00:09:25.123 "tpoint_mask": "0x0" 00:09:25.123 }, 00:09:25.123 "sock": { 00:09:25.123 "mask": "0x8000", 00:09:25.123 "tpoint_mask": "0x0" 00:09:25.123 } 00:09:25.123 }' 00:09:25.123 00:23:18 -- rpc/rpc.sh@43 -- # jq length 00:09:25.123 00:23:18 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:25.123 00:23:18 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:25.123 00:23:18 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:25.123 00:23:18 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:25.453 00:23:18 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:25.453 00:23:18 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:25.453 00:23:18 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:25.453 00:23:18 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 
00:09:25.453 00:23:19 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:25.453 00:09:25.453 real 0m0.269s 00:09:25.453 user 0m0.226s 00:09:25.453 sys 0m0.035s 00:09:25.453 00:23:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:25.453 00:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:25.453 ************************************ 00:09:25.453 END TEST rpc_trace_cmd_test 00:09:25.453 ************************************ 00:09:25.453 00:23:19 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:25.453 00:23:19 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:25.453 00:23:19 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:25.453 00:23:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:25.453 00:23:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:25.453 00:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:25.453 ************************************ 00:09:25.453 START TEST rpc_daemon_integrity 00:09:25.453 ************************************ 00:09:25.453 00:23:19 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:09:25.453 00:23:19 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:25.453 00:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.453 00:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:25.453 00:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.453 00:23:19 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:25.453 00:23:19 -- rpc/rpc.sh@13 -- # jq length 00:09:25.454 00:23:19 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:25.454 00:23:19 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:25.454 00:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.454 00:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:25.712 00:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.712 00:23:19 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:25.712 00:23:19 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:25.712 00:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.712 00:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:25.712 00:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.712 00:23:19 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:25.712 { 00:09:25.712 "name": "Malloc2", 00:09:25.712 "aliases": [ 00:09:25.712 "7548eb6d-63e6-48cf-a2e5-590fb1291e94" 00:09:25.712 ], 00:09:25.712 "product_name": "Malloc disk", 00:09:25.712 "block_size": 512, 00:09:25.712 "num_blocks": 16384, 00:09:25.712 "uuid": "7548eb6d-63e6-48cf-a2e5-590fb1291e94", 00:09:25.712 "assigned_rate_limits": { 00:09:25.712 "rw_ios_per_sec": 0, 00:09:25.712 "rw_mbytes_per_sec": 0, 00:09:25.712 "r_mbytes_per_sec": 0, 00:09:25.712 "w_mbytes_per_sec": 0 00:09:25.712 }, 00:09:25.712 "claimed": false, 00:09:25.712 "zoned": false, 00:09:25.712 "supported_io_types": { 00:09:25.712 "read": true, 00:09:25.712 "write": true, 00:09:25.712 "unmap": true, 00:09:25.712 "write_zeroes": true, 00:09:25.712 "flush": true, 00:09:25.712 "reset": true, 00:09:25.712 "compare": false, 00:09:25.712 "compare_and_write": false, 00:09:25.712 "abort": true, 00:09:25.712 "nvme_admin": false, 00:09:25.712 "nvme_io": false 00:09:25.712 }, 00:09:25.712 "memory_domains": [ 00:09:25.712 { 00:09:25.712 "dma_device_id": "system", 00:09:25.712 "dma_device_type": 1 00:09:25.712 }, 00:09:25.712 { 00:09:25.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.712 "dma_device_type": 2 00:09:25.712 } 00:09:25.712 ], 00:09:25.712 "driver_specific": {} 00:09:25.712 } 00:09:25.712 ]' 00:09:25.712 00:23:19 -- 
rpc/rpc.sh@17 -- # jq length 00:09:25.712 00:23:19 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:25.712 00:23:19 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:25.712 00:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.712 00:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:25.712 [2024-04-24 00:23:19.277627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:25.712 [2024-04-24 00:23:19.278073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.712 [2024-04-24 00:23:19.278222] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:25.712 [2024-04-24 00:23:19.278337] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.712 [2024-04-24 00:23:19.281137] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.712 [2024-04-24 00:23:19.281313] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:25.712 Passthru0 00:09:25.712 00:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.712 00:23:19 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:25.712 00:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.712 00:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:25.712 00:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.712 00:23:19 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:25.712 { 00:09:25.712 "name": "Malloc2", 00:09:25.712 "aliases": [ 00:09:25.712 "7548eb6d-63e6-48cf-a2e5-590fb1291e94" 00:09:25.712 ], 00:09:25.712 "product_name": "Malloc disk", 00:09:25.712 "block_size": 512, 00:09:25.712 "num_blocks": 16384, 00:09:25.712 "uuid": "7548eb6d-63e6-48cf-a2e5-590fb1291e94", 00:09:25.712 "assigned_rate_limits": { 00:09:25.712 "rw_ios_per_sec": 0, 00:09:25.712 "rw_mbytes_per_sec": 0, 00:09:25.712 "r_mbytes_per_sec": 0, 00:09:25.712 "w_mbytes_per_sec": 0 00:09:25.712 }, 00:09:25.712 "claimed": true, 00:09:25.712 "claim_type": "exclusive_write", 00:09:25.712 "zoned": false, 00:09:25.712 "supported_io_types": { 00:09:25.713 "read": true, 00:09:25.713 "write": true, 00:09:25.713 "unmap": true, 00:09:25.713 "write_zeroes": true, 00:09:25.713 "flush": true, 00:09:25.713 "reset": true, 00:09:25.713 "compare": false, 00:09:25.713 "compare_and_write": false, 00:09:25.713 "abort": true, 00:09:25.713 "nvme_admin": false, 00:09:25.713 "nvme_io": false 00:09:25.713 }, 00:09:25.713 "memory_domains": [ 00:09:25.713 { 00:09:25.713 "dma_device_id": "system", 00:09:25.713 "dma_device_type": 1 00:09:25.713 }, 00:09:25.713 { 00:09:25.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.713 "dma_device_type": 2 00:09:25.713 } 00:09:25.713 ], 00:09:25.713 "driver_specific": {} 00:09:25.713 }, 00:09:25.713 { 00:09:25.713 "name": "Passthru0", 00:09:25.713 "aliases": [ 00:09:25.713 "bd996102-a596-50d0-bc78-730d77c83fe6" 00:09:25.713 ], 00:09:25.713 "product_name": "passthru", 00:09:25.713 "block_size": 512, 00:09:25.713 "num_blocks": 16384, 00:09:25.713 "uuid": "bd996102-a596-50d0-bc78-730d77c83fe6", 00:09:25.713 "assigned_rate_limits": { 00:09:25.713 "rw_ios_per_sec": 0, 00:09:25.713 "rw_mbytes_per_sec": 0, 00:09:25.713 "r_mbytes_per_sec": 0, 00:09:25.713 "w_mbytes_per_sec": 0 00:09:25.713 }, 00:09:25.713 "claimed": false, 00:09:25.713 "zoned": false, 00:09:25.713 "supported_io_types": { 00:09:25.713 "read": true, 00:09:25.713 "write": true, 00:09:25.713 "unmap": true, 00:09:25.713 "write_zeroes": true, 00:09:25.713 
"flush": true, 00:09:25.713 "reset": true, 00:09:25.713 "compare": false, 00:09:25.713 "compare_and_write": false, 00:09:25.713 "abort": true, 00:09:25.713 "nvme_admin": false, 00:09:25.713 "nvme_io": false 00:09:25.713 }, 00:09:25.713 "memory_domains": [ 00:09:25.713 { 00:09:25.713 "dma_device_id": "system", 00:09:25.713 "dma_device_type": 1 00:09:25.713 }, 00:09:25.713 { 00:09:25.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.713 "dma_device_type": 2 00:09:25.713 } 00:09:25.713 ], 00:09:25.713 "driver_specific": { 00:09:25.713 "passthru": { 00:09:25.713 "name": "Passthru0", 00:09:25.713 "base_bdev_name": "Malloc2" 00:09:25.713 } 00:09:25.713 } 00:09:25.713 } 00:09:25.713 ]' 00:09:25.713 00:23:19 -- rpc/rpc.sh@21 -- # jq length 00:09:25.713 00:23:19 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:25.713 00:23:19 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:25.713 00:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.713 00:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:25.713 00:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.713 00:23:19 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:25.713 00:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.713 00:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:25.713 00:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.713 00:23:19 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:25.713 00:23:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:25.713 00:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:25.713 00:23:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:25.713 00:23:19 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:25.713 00:23:19 -- rpc/rpc.sh@26 -- # jq length 00:09:25.713 00:23:19 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:25.713 00:09:25.713 real 0m0.345s 00:09:25.713 user 0m0.195s 00:09:25.713 sys 0m0.042s 00:09:25.713 00:23:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:25.713 00:23:19 -- common/autotest_common.sh@10 -- # set +x 00:09:25.713 ************************************ 00:09:25.713 END TEST rpc_daemon_integrity 00:09:25.713 ************************************ 00:09:25.970 00:23:19 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:25.970 00:23:19 -- rpc/rpc.sh@84 -- # killprocess 110354 00:09:25.970 00:23:19 -- common/autotest_common.sh@936 -- # '[' -z 110354 ']' 00:09:25.970 00:23:19 -- common/autotest_common.sh@940 -- # kill -0 110354 00:09:25.970 00:23:19 -- common/autotest_common.sh@941 -- # uname 00:09:25.970 00:23:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:25.970 00:23:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110354 00:09:25.970 00:23:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:25.970 00:23:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:25.970 00:23:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110354' 00:09:25.970 killing process with pid 110354 00:09:25.970 00:23:19 -- common/autotest_common.sh@955 -- # kill 110354 00:09:25.970 00:23:19 -- common/autotest_common.sh@960 -- # wait 110354 00:09:28.556 ************************************ 00:09:28.556 END TEST rpc 00:09:28.556 ************************************ 00:09:28.556 00:09:28.556 real 0m6.015s 00:09:28.556 user 0m6.786s 00:09:28.556 sys 0m0.967s 00:09:28.556 00:23:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:28.556 00:23:22 -- common/autotest_common.sh@10 
-- # set +x 00:09:28.813 00:23:22 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:28.813 00:23:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:28.813 00:23:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.813 00:23:22 -- common/autotest_common.sh@10 -- # set +x 00:09:28.813 ************************************ 00:09:28.813 START TEST skip_rpc 00:09:28.813 ************************************ 00:09:28.813 00:23:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:28.813 * Looking for test storage... 00:09:28.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:28.813 00:23:22 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:28.813 00:23:22 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:28.813 00:23:22 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:28.813 00:23:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:28.813 00:23:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.813 00:23:22 -- common/autotest_common.sh@10 -- # set +x 00:09:28.813 ************************************ 00:09:28.813 START TEST skip_rpc 00:09:28.814 ************************************ 00:09:28.814 00:23:22 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:09:28.814 00:23:22 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=110643 00:09:28.814 00:23:22 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:28.814 00:23:22 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:28.814 00:23:22 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:29.071 [2024-04-24 00:23:22.700041] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:09:29.071 [2024-04-24 00:23:22.700290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110643 ] 00:09:29.329 [2024-04-24 00:23:22.886073] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.586 [2024-04-24 00:23:23.189257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.850 00:23:27 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:34.850 00:23:27 -- common/autotest_common.sh@638 -- # local es=0 00:09:34.850 00:23:27 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:34.850 00:23:27 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:09:34.850 00:23:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:34.850 00:23:27 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:09:34.850 00:23:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:34.850 00:23:27 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:09:34.850 00:23:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:34.850 00:23:27 -- common/autotest_common.sh@10 -- # set +x 00:09:34.850 00:23:27 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:09:34.850 00:23:27 -- common/autotest_common.sh@641 -- # es=1 00:09:34.850 00:23:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:34.850 00:23:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:34.850 00:23:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:34.850 00:23:27 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:34.850 00:23:27 -- rpc/skip_rpc.sh@23 -- # killprocess 110643 00:09:34.850 00:23:27 -- common/autotest_common.sh@936 -- # '[' -z 110643 ']' 00:09:34.850 00:23:27 -- common/autotest_common.sh@940 -- # kill -0 110643 00:09:34.850 00:23:27 -- common/autotest_common.sh@941 -- # uname 00:09:34.850 00:23:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:34.850 00:23:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110643 00:09:34.850 00:23:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:34.850 killing process with pid 110643 00:09:34.850 00:23:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:34.850 00:23:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110643' 00:09:34.850 00:23:27 -- common/autotest_common.sh@955 -- # kill 110643 00:09:34.850 00:23:27 -- common/autotest_common.sh@960 -- # wait 110643 00:09:36.748 00:09:36.748 real 0m7.790s 00:09:36.748 user 0m7.281s 00:09:36.748 sys 0m0.431s 00:09:36.748 00:23:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:36.748 ************************************ 00:09:36.748 END TEST skip_rpc 00:09:36.748 ************************************ 00:09:36.748 00:23:30 -- common/autotest_common.sh@10 -- # set +x 00:09:36.748 00:23:30 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:36.748 00:23:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:36.748 00:23:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.748 00:23:30 -- common/autotest_common.sh@10 -- # set +x 00:09:36.748 ************************************ 00:09:36.748 START TEST skip_rpc_with_json 00:09:36.748 ************************************ 00:09:36.748 00:23:30 -- common/autotest_common.sh@1111 -- # 
test_skip_rpc_with_json 00:09:36.748 00:23:30 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:36.748 00:23:30 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=110776 00:09:36.748 00:23:30 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:36.748 00:23:30 -- rpc/skip_rpc.sh@31 -- # waitforlisten 110776 00:09:36.748 00:23:30 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:36.749 00:23:30 -- common/autotest_common.sh@817 -- # '[' -z 110776 ']' 00:09:36.749 00:23:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.749 00:23:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:36.749 00:23:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.749 00:23:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:36.749 00:23:30 -- common/autotest_common.sh@10 -- # set +x 00:09:37.006 [2024-04-24 00:23:30.572053] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:09:37.006 [2024-04-24 00:23:30.572464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110776 ] 00:09:37.006 [2024-04-24 00:23:30.749300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.263 [2024-04-24 00:23:31.021347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.634 00:23:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:38.634 00:23:32 -- common/autotest_common.sh@850 -- # return 0 00:09:38.634 00:23:32 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:38.634 00:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.634 00:23:32 -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 [2024-04-24 00:23:32.072209] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:38.634 request: 00:09:38.634 { 00:09:38.634 "trtype": "tcp", 00:09:38.634 "method": "nvmf_get_transports", 00:09:38.634 "req_id": 1 00:09:38.634 } 00:09:38.634 Got JSON-RPC error response 00:09:38.634 response: 00:09:38.634 { 00:09:38.634 "code": -19, 00:09:38.634 "message": "No such device" 00:09:38.634 } 00:09:38.634 00:23:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:09:38.634 00:23:32 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:38.634 00:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.634 00:23:32 -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 [2024-04-24 00:23:32.080311] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.634 00:23:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.634 00:23:32 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:38.634 00:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:38.634 00:23:32 -- common/autotest_common.sh@10 -- # set +x 00:09:38.634 00:23:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:38.634 00:23:32 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:38.634 { 00:09:38.634 "subsystems": [ 00:09:38.634 { 00:09:38.634 "subsystem": "scheduler", 00:09:38.634 "config": [ 00:09:38.634 { 00:09:38.634 "method": 
"framework_set_scheduler", 00:09:38.634 "params": { 00:09:38.634 "name": "static" 00:09:38.634 } 00:09:38.634 } 00:09:38.634 ] 00:09:38.634 }, 00:09:38.634 { 00:09:38.634 "subsystem": "vmd", 00:09:38.634 "config": [] 00:09:38.634 }, 00:09:38.634 { 00:09:38.634 "subsystem": "sock", 00:09:38.634 "config": [ 00:09:38.634 { 00:09:38.634 "method": "sock_impl_set_options", 00:09:38.634 "params": { 00:09:38.634 "impl_name": "posix", 00:09:38.634 "recv_buf_size": 2097152, 00:09:38.634 "send_buf_size": 2097152, 00:09:38.634 "enable_recv_pipe": true, 00:09:38.634 "enable_quickack": false, 00:09:38.634 "enable_placement_id": 0, 00:09:38.634 "enable_zerocopy_send_server": true, 00:09:38.634 "enable_zerocopy_send_client": false, 00:09:38.634 "zerocopy_threshold": 0, 00:09:38.634 "tls_version": 0, 00:09:38.634 "enable_ktls": false 00:09:38.634 } 00:09:38.634 }, 00:09:38.634 { 00:09:38.634 "method": "sock_impl_set_options", 00:09:38.634 "params": { 00:09:38.634 "impl_name": "ssl", 00:09:38.634 "recv_buf_size": 4096, 00:09:38.634 "send_buf_size": 4096, 00:09:38.634 "enable_recv_pipe": true, 00:09:38.634 "enable_quickack": false, 00:09:38.634 "enable_placement_id": 0, 00:09:38.634 "enable_zerocopy_send_server": true, 00:09:38.634 "enable_zerocopy_send_client": false, 00:09:38.634 "zerocopy_threshold": 0, 00:09:38.634 "tls_version": 0, 00:09:38.634 "enable_ktls": false 00:09:38.634 } 00:09:38.634 } 00:09:38.634 ] 00:09:38.634 }, 00:09:38.634 { 00:09:38.634 "subsystem": "iobuf", 00:09:38.634 "config": [ 00:09:38.634 { 00:09:38.634 "method": "iobuf_set_options", 00:09:38.634 "params": { 00:09:38.634 "small_pool_count": 8192, 00:09:38.634 "large_pool_count": 1024, 00:09:38.634 "small_bufsize": 8192, 00:09:38.634 "large_bufsize": 135168 00:09:38.634 } 00:09:38.634 } 00:09:38.634 ] 00:09:38.634 }, 00:09:38.634 { 00:09:38.634 "subsystem": "keyring", 00:09:38.634 "config": [] 00:09:38.634 }, 00:09:38.634 { 00:09:38.634 "subsystem": "accel", 00:09:38.634 "config": [ 00:09:38.634 { 00:09:38.634 "method": "accel_set_options", 00:09:38.634 "params": { 00:09:38.634 "small_cache_size": 128, 00:09:38.634 "large_cache_size": 16, 00:09:38.634 "task_count": 2048, 00:09:38.634 "sequence_count": 2048, 00:09:38.634 "buf_count": 2048 00:09:38.634 } 00:09:38.634 } 00:09:38.634 ] 00:09:38.634 }, 00:09:38.634 { 00:09:38.634 "subsystem": "bdev", 00:09:38.634 "config": [ 00:09:38.634 { 00:09:38.634 "method": "bdev_set_options", 00:09:38.634 "params": { 00:09:38.634 "bdev_io_pool_size": 65535, 00:09:38.634 "bdev_io_cache_size": 256, 00:09:38.634 "bdev_auto_examine": true, 00:09:38.634 "iobuf_small_cache_size": 128, 00:09:38.634 "iobuf_large_cache_size": 16 00:09:38.634 } 00:09:38.634 }, 00:09:38.634 { 00:09:38.634 "method": "bdev_raid_set_options", 00:09:38.634 "params": { 00:09:38.634 "process_window_size_kb": 1024 00:09:38.634 } 00:09:38.634 }, 00:09:38.634 { 00:09:38.634 "method": "bdev_nvme_set_options", 00:09:38.634 "params": { 00:09:38.634 "action_on_timeout": "none", 00:09:38.635 "timeout_us": 0, 00:09:38.635 "timeout_admin_us": 0, 00:09:38.635 "keep_alive_timeout_ms": 10000, 00:09:38.635 "arbitration_burst": 0, 00:09:38.635 "low_priority_weight": 0, 00:09:38.635 "medium_priority_weight": 0, 00:09:38.635 "high_priority_weight": 0, 00:09:38.635 "nvme_adminq_poll_period_us": 10000, 00:09:38.635 "nvme_ioq_poll_period_us": 0, 00:09:38.635 "io_queue_requests": 0, 00:09:38.635 "delay_cmd_submit": true, 00:09:38.635 "transport_retry_count": 4, 00:09:38.635 "bdev_retry_count": 3, 00:09:38.635 "transport_ack_timeout": 0, 00:09:38.635 
"ctrlr_loss_timeout_sec": 0, 00:09:38.635 "reconnect_delay_sec": 0, 00:09:38.635 "fast_io_fail_timeout_sec": 0, 00:09:38.635 "disable_auto_failback": false, 00:09:38.635 "generate_uuids": false, 00:09:38.635 "transport_tos": 0, 00:09:38.635 "nvme_error_stat": false, 00:09:38.635 "rdma_srq_size": 0, 00:09:38.635 "io_path_stat": false, 00:09:38.635 "allow_accel_sequence": false, 00:09:38.635 "rdma_max_cq_size": 0, 00:09:38.635 "rdma_cm_event_timeout_ms": 0, 00:09:38.635 "dhchap_digests": [ 00:09:38.635 "sha256", 00:09:38.635 "sha384", 00:09:38.635 "sha512" 00:09:38.635 ], 00:09:38.635 "dhchap_dhgroups": [ 00:09:38.635 "null", 00:09:38.635 "ffdhe2048", 00:09:38.635 "ffdhe3072", 00:09:38.635 "ffdhe4096", 00:09:38.635 "ffdhe6144", 00:09:38.635 "ffdhe8192" 00:09:38.635 ] 00:09:38.635 } 00:09:38.635 }, 00:09:38.635 { 00:09:38.635 "method": "bdev_nvme_set_hotplug", 00:09:38.635 "params": { 00:09:38.635 "period_us": 100000, 00:09:38.635 "enable": false 00:09:38.635 } 00:09:38.635 }, 00:09:38.635 { 00:09:38.635 "method": "bdev_iscsi_set_options", 00:09:38.635 "params": { 00:09:38.635 "timeout_sec": 30 00:09:38.635 } 00:09:38.635 }, 00:09:38.635 { 00:09:38.635 "method": "bdev_wait_for_examine" 00:09:38.635 } 00:09:38.635 ] 00:09:38.635 }, 00:09:38.635 { 00:09:38.635 "subsystem": "nvmf", 00:09:38.635 "config": [ 00:09:38.635 { 00:09:38.635 "method": "nvmf_set_config", 00:09:38.635 "params": { 00:09:38.635 "discovery_filter": "match_any", 00:09:38.635 "admin_cmd_passthru": { 00:09:38.635 "identify_ctrlr": false 00:09:38.635 } 00:09:38.635 } 00:09:38.635 }, 00:09:38.635 { 00:09:38.635 "method": "nvmf_set_max_subsystems", 00:09:38.635 "params": { 00:09:38.635 "max_subsystems": 1024 00:09:38.635 } 00:09:38.635 }, 00:09:38.635 { 00:09:38.635 "method": "nvmf_set_crdt", 00:09:38.635 "params": { 00:09:38.635 "crdt1": 0, 00:09:38.635 "crdt2": 0, 00:09:38.635 "crdt3": 0 00:09:38.635 } 00:09:38.635 }, 00:09:38.635 { 00:09:38.635 "method": "nvmf_create_transport", 00:09:38.635 "params": { 00:09:38.635 "trtype": "TCP", 00:09:38.635 "max_queue_depth": 128, 00:09:38.635 "max_io_qpairs_per_ctrlr": 127, 00:09:38.635 "in_capsule_data_size": 4096, 00:09:38.635 "max_io_size": 131072, 00:09:38.635 "io_unit_size": 131072, 00:09:38.635 "max_aq_depth": 128, 00:09:38.635 "num_shared_buffers": 511, 00:09:38.635 "buf_cache_size": 4294967295, 00:09:38.635 "dif_insert_or_strip": false, 00:09:38.635 "zcopy": false, 00:09:38.635 "c2h_success": true, 00:09:38.635 "sock_priority": 0, 00:09:38.635 "abort_timeout_sec": 1, 00:09:38.635 "ack_timeout": 0 00:09:38.635 } 00:09:38.635 } 00:09:38.635 ] 00:09:38.635 }, 00:09:38.635 { 00:09:38.635 "subsystem": "nbd", 00:09:38.635 "config": [] 00:09:38.635 }, 00:09:38.635 { 00:09:38.635 "subsystem": "vhost_blk", 00:09:38.635 "config": [] 00:09:38.635 }, 00:09:38.635 { 00:09:38.635 "subsystem": "scsi", 00:09:38.635 "config": null 00:09:38.635 }, 00:09:38.635 { 00:09:38.635 "subsystem": "iscsi", 00:09:38.635 "config": [ 00:09:38.635 { 00:09:38.635 "method": "iscsi_set_options", 00:09:38.635 "params": { 00:09:38.635 "node_base": "iqn.2016-06.io.spdk", 00:09:38.635 "max_sessions": 128, 00:09:38.635 "max_connections_per_session": 2, 00:09:38.635 "max_queue_depth": 64, 00:09:38.635 "default_time2wait": 2, 00:09:38.635 "default_time2retain": 20, 00:09:38.635 "first_burst_length": 8192, 00:09:38.635 "immediate_data": true, 00:09:38.635 "allow_duplicated_isid": false, 00:09:38.635 "error_recovery_level": 0, 00:09:38.635 "nop_timeout": 60, 00:09:38.635 "nop_in_interval": 30, 00:09:38.635 "disable_chap": 
false, 00:09:38.635 "require_chap": false, 00:09:38.635 "mutual_chap": false, 00:09:38.635 "chap_group": 0, 00:09:38.635 "max_large_datain_per_connection": 64, 00:09:38.635 "max_r2t_per_connection": 4, 00:09:38.635 "pdu_pool_size": 36864, 00:09:38.635 "immediate_data_pool_size": 16384, 00:09:38.635 "data_out_pool_size": 2048 00:09:38.635 } 00:09:38.635 } 00:09:38.635 ] 00:09:38.635 }, 00:09:38.635 { 00:09:38.635 "subsystem": "vhost_scsi", 00:09:38.635 "config": [] 00:09:38.635 } 00:09:38.635 ] 00:09:38.635 } 00:09:38.635 00:23:32 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:38.635 00:23:32 -- rpc/skip_rpc.sh@40 -- # killprocess 110776 00:09:38.635 00:23:32 -- common/autotest_common.sh@936 -- # '[' -z 110776 ']' 00:09:38.635 00:23:32 -- common/autotest_common.sh@940 -- # kill -0 110776 00:09:38.635 00:23:32 -- common/autotest_common.sh@941 -- # uname 00:09:38.635 00:23:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:38.635 00:23:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110776 00:09:38.635 00:23:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:38.635 00:23:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:38.635 killing process with pid 110776 00:09:38.635 00:23:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110776' 00:09:38.635 00:23:32 -- common/autotest_common.sh@955 -- # kill 110776 00:09:38.635 00:23:32 -- common/autotest_common.sh@960 -- # wait 110776 00:09:41.910 00:23:35 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=110840 00:09:41.910 00:23:35 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:41.910 00:23:35 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:47.172 00:23:40 -- rpc/skip_rpc.sh@50 -- # killprocess 110840 00:09:47.172 00:23:40 -- common/autotest_common.sh@936 -- # '[' -z 110840 ']' 00:09:47.172 00:23:40 -- common/autotest_common.sh@940 -- # kill -0 110840 00:09:47.172 00:23:40 -- common/autotest_common.sh@941 -- # uname 00:09:47.172 00:23:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:47.172 00:23:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110840 00:09:47.172 00:23:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:47.172 00:23:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:47.172 killing process with pid 110840 00:09:47.172 00:23:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110840' 00:09:47.172 00:23:40 -- common/autotest_common.sh@955 -- # kill 110840 00:09:47.172 00:23:40 -- common/autotest_common.sh@960 -- # wait 110840 00:09:49.072 00:23:42 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:49.330 00:23:42 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:49.330 00:09:49.330 real 0m12.384s 00:09:49.330 user 0m11.973s 00:09:49.330 sys 0m0.810s 00:09:49.330 00:23:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:49.330 00:23:42 -- common/autotest_common.sh@10 -- # set +x 00:09:49.330 ************************************ 00:09:49.330 END TEST skip_rpc_with_json 00:09:49.330 ************************************ 00:09:49.330 00:23:42 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:49.330 00:23:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:49.330 00:23:42 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.330 00:23:42 -- common/autotest_common.sh@10 -- # set +x 00:09:49.330 ************************************ 00:09:49.330 START TEST skip_rpc_with_delay 00:09:49.330 ************************************ 00:09:49.330 00:23:42 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:09:49.330 00:23:42 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:49.330 00:23:42 -- common/autotest_common.sh@638 -- # local es=0 00:09:49.330 00:23:42 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:49.330 00:23:42 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:49.330 00:23:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:49.330 00:23:42 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:49.330 00:23:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:49.330 00:23:42 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:49.330 00:23:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:49.330 00:23:42 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:49.330 00:23:42 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:49.330 00:23:42 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:49.330 [2024-04-24 00:23:43.043736] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:09:49.330 [2024-04-24 00:23:43.043956] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:49.330 00:23:43 -- common/autotest_common.sh@641 -- # es=1 00:09:49.330 00:23:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:49.330 00:23:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:49.330 00:23:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:49.330 00:09:49.330 real 0m0.142s 00:09:49.330 user 0m0.059s 00:09:49.330 sys 0m0.083s 00:09:49.330 00:23:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:49.330 00:23:43 -- common/autotest_common.sh@10 -- # set +x 00:09:49.330 ************************************ 00:09:49.330 END TEST skip_rpc_with_delay 00:09:49.330 ************************************ 00:09:49.587 00:23:43 -- rpc/skip_rpc.sh@77 -- # uname 00:09:49.587 00:23:43 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:49.587 00:23:43 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:49.587 00:23:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:49.587 00:23:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.587 00:23:43 -- common/autotest_common.sh@10 -- # set +x 00:09:49.587 ************************************ 00:09:49.587 START TEST exit_on_failed_rpc_init 00:09:49.587 ************************************ 00:09:49.587 00:23:43 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:09:49.587 00:23:43 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=110999 00:09:49.587 00:23:43 -- rpc/skip_rpc.sh@63 -- # waitforlisten 110999 00:09:49.587 00:23:43 -- common/autotest_common.sh@817 -- # '[' -z 110999 ']' 00:09:49.587 00:23:43 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:49.588 00:23:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.588 00:23:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:49.588 00:23:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.588 00:23:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:49.588 00:23:43 -- common/autotest_common.sh@10 -- # set +x 00:09:49.588 [2024-04-24 00:23:43.317831] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:09:49.588 [2024-04-24 00:23:43.318051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110999 ] 00:09:49.846 [2024-04-24 00:23:43.493523] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.134 [2024-04-24 00:23:43.729829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.088 00:23:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:51.088 00:23:44 -- common/autotest_common.sh@850 -- # return 0 00:09:51.088 00:23:44 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:51.088 00:23:44 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:51.088 00:23:44 -- common/autotest_common.sh@638 -- # local es=0 00:09:51.088 00:23:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:51.088 00:23:44 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:51.088 00:23:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:51.088 00:23:44 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:51.088 00:23:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:51.088 00:23:44 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:51.088 00:23:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:51.088 00:23:44 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:51.088 00:23:44 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:51.088 00:23:44 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:51.088 [2024-04-24 00:23:44.827087] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:09:51.088 [2024-04-24 00:23:44.827296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111024 ] 00:09:51.346 [2024-04-24 00:23:45.010073] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.602 [2024-04-24 00:23:45.257085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.602 [2024-04-24 00:23:45.257204] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:51.602 [2024-04-24 00:23:45.257238] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:51.602 [2024-04-24 00:23:45.257260] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:52.165 00:23:45 -- common/autotest_common.sh@641 -- # es=234 00:09:52.165 00:23:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:52.165 00:23:45 -- common/autotest_common.sh@650 -- # es=106 00:09:52.165 00:23:45 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:52.165 00:23:45 -- common/autotest_common.sh@658 -- # es=1 00:09:52.165 00:23:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:52.165 00:23:45 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:52.165 00:23:45 -- rpc/skip_rpc.sh@70 -- # killprocess 110999 00:09:52.165 00:23:45 -- common/autotest_common.sh@936 -- # '[' -z 110999 ']' 00:09:52.165 00:23:45 -- common/autotest_common.sh@940 -- # kill -0 110999 00:09:52.165 00:23:45 -- common/autotest_common.sh@941 -- # uname 00:09:52.165 00:23:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:52.165 00:23:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110999 00:09:52.165 00:23:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:52.165 00:23:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:52.165 killing process with pid 110999 00:09:52.165 00:23:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110999' 00:09:52.165 00:23:45 -- common/autotest_common.sh@955 -- # kill 110999 00:09:52.166 00:23:45 -- common/autotest_common.sh@960 -- # wait 110999 00:09:55.446 00:09:55.446 real 0m5.341s 00:09:55.446 user 0m6.088s 00:09:55.446 sys 0m0.602s 00:09:55.446 00:23:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:55.446 00:23:48 -- common/autotest_common.sh@10 -- # set +x 00:09:55.446 ************************************ 00:09:55.446 END TEST exit_on_failed_rpc_init 00:09:55.446 ************************************ 00:09:55.446 00:23:48 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:55.446 00:09:55.446 real 0m26.181s 00:09:55.446 user 0m25.633s 00:09:55.446 sys 0m2.232s 00:09:55.446 00:23:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:55.446 ************************************ 00:09:55.446 END TEST skip_rpc 00:09:55.446 00:23:48 -- common/autotest_common.sh@10 -- # set +x 00:09:55.446 ************************************ 00:09:55.446 00:23:48 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:55.446 00:23:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:55.446 00:23:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.446 00:23:48 -- common/autotest_common.sh@10 -- # set +x 00:09:55.446 ************************************ 00:09:55.446 START TEST rpc_client 00:09:55.446 ************************************ 00:09:55.446 00:23:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:55.446 * Looking for test storage... 
00:09:55.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:55.446 00:23:48 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:55.446 OK 00:09:55.446 00:23:48 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:55.446 00:09:55.446 real 0m0.181s 00:09:55.446 user 0m0.119s 00:09:55.446 sys 0m0.072s 00:09:55.446 00:23:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:55.446 00:23:48 -- common/autotest_common.sh@10 -- # set +x 00:09:55.446 ************************************ 00:09:55.446 END TEST rpc_client 00:09:55.446 ************************************ 00:09:55.446 00:23:48 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:55.446 00:23:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:55.446 00:23:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.446 00:23:48 -- common/autotest_common.sh@10 -- # set +x 00:09:55.446 ************************************ 00:09:55.446 START TEST json_config 00:09:55.446 ************************************ 00:09:55.446 00:23:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:55.446 00:23:49 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:55.446 00:23:49 -- nvmf/common.sh@7 -- # uname -s 00:09:55.446 00:23:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.446 00:23:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.446 00:23:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.446 00:23:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.446 00:23:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.446 00:23:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.446 00:23:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.446 00:23:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.446 00:23:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.446 00:23:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.446 00:23:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00322401-f902-4b1b-a98b-e1c60ac564fb 00:09:55.446 00:23:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00322401-f902-4b1b-a98b-e1c60ac564fb 00:09:55.446 00:23:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.446 00:23:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.446 00:23:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:55.446 00:23:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.446 00:23:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:55.446 00:23:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.446 00:23:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.446 00:23:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.446 00:23:49 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:55.446 00:23:49 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:55.446 00:23:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:55.446 00:23:49 -- paths/export.sh@5 -- # export PATH 00:09:55.446 00:23:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:55.446 00:23:49 -- nvmf/common.sh@47 -- # : 0 00:09:55.446 00:23:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:55.446 00:23:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:55.446 00:23:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.446 00:23:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.446 00:23:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.446 00:23:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:55.446 00:23:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:55.446 00:23:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:55.446 00:23:49 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:55.446 00:23:49 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:55.446 00:23:49 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:55.446 00:23:49 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:55.446 00:23:49 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:55.446 00:23:49 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:55.446 00:23:49 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:55.446 00:23:49 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:55.446 00:23:49 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:55.446 00:23:49 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:55.446 00:23:49 -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:55.446 00:23:49 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:55.446 00:23:49 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:55.446 00:23:49 -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:55.446 00:23:49 -- json_config/json_config.sh@355 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:09:55.446 00:23:49 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:09:55.446 INFO: JSON configuration test init 00:09:55.446 00:23:49 -- json_config/json_config.sh@357 -- # json_config_test_init 00:09:55.446 00:23:49 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:09:55.446 00:23:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:55.446 00:23:49 -- common/autotest_common.sh@10 -- # set +x 00:09:55.446 00:23:49 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:09:55.446 00:23:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:55.446 00:23:49 -- common/autotest_common.sh@10 -- # set +x 00:09:55.446 00:23:49 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:09:55.446 00:23:49 -- json_config/common.sh@9 -- # local app=target 00:09:55.446 00:23:49 -- json_config/common.sh@10 -- # shift 00:09:55.446 00:23:49 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:55.446 00:23:49 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:55.446 00:23:49 -- json_config/common.sh@15 -- # local app_extra_params= 00:09:55.446 00:23:49 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:55.446 00:23:49 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:55.446 00:23:49 -- json_config/common.sh@22 -- # app_pid["$app"]=111209 00:09:55.446 Waiting for target to run... 00:09:55.446 00:23:49 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:55.446 00:23:49 -- json_config/common.sh@25 -- # waitforlisten 111209 /var/tmp/spdk_tgt.sock 00:09:55.446 00:23:49 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:55.446 00:23:49 -- common/autotest_common.sh@817 -- # '[' -z 111209 ']' 00:09:55.446 00:23:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:55.446 00:23:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:55.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:55.446 00:23:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:55.447 00:23:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:55.447 00:23:49 -- common/autotest_common.sh@10 -- # set +x 00:09:55.447 [2024-04-24 00:23:49.214044] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:09:55.447 [2024-04-24 00:23:49.214326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111209 ] 00:09:56.013 [2024-04-24 00:23:49.649478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.270 [2024-04-24 00:23:49.873524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.527 00:23:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:56.528 00:23:50 -- common/autotest_common.sh@850 -- # return 0 00:09:56.528 00:09:56.528 00:23:50 -- json_config/common.sh@26 -- # echo '' 00:09:56.528 00:23:50 -- json_config/json_config.sh@269 -- # create_accel_config 00:09:56.528 00:23:50 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:09:56.528 00:23:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:56.528 00:23:50 -- common/autotest_common.sh@10 -- # set +x 00:09:56.528 00:23:50 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:09:56.528 00:23:50 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:09:56.528 00:23:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:56.528 00:23:50 -- common/autotest_common.sh@10 -- # set +x 00:09:56.528 00:23:50 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:56.528 00:23:50 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:09:56.528 00:23:50 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:57.897 00:23:51 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:09:57.897 00:23:51 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:57.897 00:23:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:57.897 00:23:51 -- common/autotest_common.sh@10 -- # set +x 00:09:57.897 00:23:51 -- json_config/json_config.sh@45 -- # local ret=0 00:09:57.897 00:23:51 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:57.897 00:23:51 -- json_config/json_config.sh@46 -- # local enabled_types 00:09:57.897 00:23:51 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:57.897 00:23:51 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:57.897 00:23:51 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:57.897 00:23:51 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:57.897 00:23:51 -- json_config/json_config.sh@48 -- # local get_types 00:09:57.897 00:23:51 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:57.897 00:23:51 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:09:57.897 00:23:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:57.897 00:23:51 -- common/autotest_common.sh@10 -- # set +x 00:09:57.897 00:23:51 -- json_config/json_config.sh@55 -- # return 0 00:09:57.897 00:23:51 -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:09:57.897 00:23:51 -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:09:57.897 00:23:51 -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:09:57.897 00:23:51 -- 
common/autotest_common.sh@710 -- # xtrace_disable 00:09:57.897 00:23:51 -- common/autotest_common.sh@10 -- # set +x 00:09:57.897 00:23:51 -- json_config/json_config.sh@107 -- # expected_notifications=() 00:09:57.897 00:23:51 -- json_config/json_config.sh@107 -- # local expected_notifications 00:09:57.898 00:23:51 -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:09:57.898 00:23:51 -- json_config/json_config.sh@111 -- # get_notifications 00:09:58.155 00:23:51 -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:58.155 00:23:51 -- json_config/json_config.sh@61 -- # IFS=: 00:09:58.155 00:23:51 -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:58.155 00:23:51 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:58.155 00:23:51 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:58.155 00:23:51 -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:58.155 00:23:51 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:58.155 00:23:51 -- json_config/json_config.sh@61 -- # IFS=: 00:09:58.155 00:23:51 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:58.155 00:23:51 -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:09:58.155 00:23:51 -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:09:58.155 00:23:51 -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:58.155 00:23:51 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:58.412 Nvme0n1p0 Nvme0n1p1 00:09:58.412 00:23:52 -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:58.412 00:23:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:58.669 [2024-04-24 00:23:52.456025] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:58.669 [2024-04-24 00:23:52.456794] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:58.926 00:09:58.926 00:23:52 -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:58.926 00:23:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:59.183 Malloc3 00:09:59.183 00:23:52 -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:59.183 00:23:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:59.441 [2024-04-24 00:23:52.991904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:59.441 [2024-04-24 00:23:52.992412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.441 [2024-04-24 00:23:52.992867] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:59.441 [2024-04-24 00:23:52.993152] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.441 [2024-04-24 00:23:52.996175] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.441 [2024-04-24 00:23:52.996466] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:59.441 PTBdevFromMalloc3 00:09:59.441 00:23:53 -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:59.441 00:23:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:59.698 Null0 00:09:59.698 00:23:53 -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:59.698 00:23:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:59.956 Malloc0 00:09:59.956 00:23:53 -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:59.956 00:23:53 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:10:00.214 Malloc1 00:10:00.214 00:23:53 -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:10:00.214 00:23:53 -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:10:00.778 102400+0 records in 00:10:00.779 102400+0 records out 00:10:00.779 104857600 bytes (105 MB, 100 MiB) copied, 0.452754 s, 232 MB/s 00:10:00.779 00:23:54 -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:10:00.779 00:23:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:10:01.036 aio_disk 00:10:01.036 00:23:54 -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:10:01.036 00:23:54 -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:01.036 00:23:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:10:01.294 6f68d929-97c8-4be5-aef6-e5c6873376b8 00:10:01.294 00:23:54 -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:10:01.294 00:23:54 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:10:01.294 00:23:54 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:10:01.551 00:23:55 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:10:01.551 00:23:55 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:10:01.808 00:23:55 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:10:01.808 00:23:55 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 
snapshot0 00:10:02.065 00:23:55 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:02.065 00:23:55 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:10:02.429 00:23:56 -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:10:02.429 00:23:56 -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:10:02.429 00:23:56 -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:0799d9e1-1860-4c4a-ba41-31a6aa0d16ee bdev_register:59c63d98-8973-4f03-aa2f-2a00cdab7898 bdev_register:38fc1da1-7bb2-496c-930c-e381330497fe bdev_register:e6d58a9d-c2c0-466d-a76d-0e57a13534ec 00:10:02.429 00:23:56 -- json_config/json_config.sh@67 -- # local events_to_check 00:10:02.429 00:23:56 -- json_config/json_config.sh@68 -- # local recorded_events 00:10:02.429 00:23:56 -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:10:02.429 00:23:56 -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:0799d9e1-1860-4c4a-ba41-31a6aa0d16ee bdev_register:59c63d98-8973-4f03-aa2f-2a00cdab7898 bdev_register:38fc1da1-7bb2-496c-930c-e381330497fe bdev_register:e6d58a9d-c2c0-466d-a76d-0e57a13534ec 00:10:02.429 00:23:56 -- json_config/json_config.sh@71 -- # sort 00:10:02.429 00:23:56 -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:10:02.429 00:23:56 -- json_config/json_config.sh@72 -- # get_notifications 00:10:02.429 00:23:56 -- json_config/json_config.sh@72 -- # sort 00:10:02.429 00:23:56 -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:10:02.429 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.429 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.429 00:23:56 -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:10:02.429 00:23:56 -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:10:02.429 00:23:56 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- 
# echo bdev_register:Malloc3 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:0799d9e1-1860-4c4a-ba41-31a6aa0d16ee 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:59c63d98-8973-4f03-aa2f-2a00cdab7898 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:38fc1da1-7bb2-496c-930c-e381330497fe 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@62 -- # echo bdev_register:e6d58a9d-c2c0-466d-a76d-0e57a13534ec 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # IFS=: 00:10:02.689 00:23:56 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:10:02.689 00:23:56 -- json_config/json_config.sh@74 -- # [[ bdev_register:0799d9e1-1860-4c4a-ba41-31a6aa0d16ee bdev_register:38fc1da1-7bb2-496c-930c-e381330497fe bdev_register:59c63d98-8973-4f03-aa2f-2a00cdab7898 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 
bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e6d58a9d-c2c0-466d-a76d-0e57a13534ec != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\7\9\9\d\9\e\1\-\1\8\6\0\-\4\c\4\a\-\b\a\4\1\-\3\1\a\6\a\a\0\d\1\6\e\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\3\8\f\c\1\d\a\1\-\7\b\b\2\-\4\9\6\c\-\9\3\0\c\-\e\3\8\1\3\3\0\4\9\7\f\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\9\c\6\3\d\9\8\-\8\9\7\3\-\4\f\0\3\-\a\a\2\f\-\2\a\0\0\c\d\a\b\7\8\9\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\6\d\5\8\a\9\d\-\c\2\c\0\-\4\6\6\d\-\a\7\6\d\-\0\e\5\7\a\1\3\5\3\4\e\c ]] 00:10:02.689 00:23:56 -- json_config/json_config.sh@86 -- # cat 00:10:02.689 00:23:56 -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:0799d9e1-1860-4c4a-ba41-31a6aa0d16ee bdev_register:38fc1da1-7bb2-496c-930c-e381330497fe bdev_register:59c63d98-8973-4f03-aa2f-2a00cdab7898 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e6d58a9d-c2c0-466d-a76d-0e57a13534ec 00:10:02.689 Expected events matched: 00:10:02.689 bdev_register:0799d9e1-1860-4c4a-ba41-31a6aa0d16ee 00:10:02.689 bdev_register:38fc1da1-7bb2-496c-930c-e381330497fe 00:10:02.689 bdev_register:59c63d98-8973-4f03-aa2f-2a00cdab7898 00:10:02.689 bdev_register:Malloc0 00:10:02.689 bdev_register:Malloc0p0 00:10:02.689 bdev_register:Malloc0p1 00:10:02.689 bdev_register:Malloc0p2 00:10:02.689 bdev_register:Malloc1 00:10:02.689 bdev_register:Malloc3 00:10:02.689 bdev_register:Null0 00:10:02.689 bdev_register:Nvme0n1 00:10:02.689 bdev_register:Nvme0n1p0 00:10:02.689 bdev_register:Nvme0n1p1 00:10:02.689 bdev_register:PTBdevFromMalloc3 00:10:02.689 bdev_register:aio_disk 00:10:02.689 bdev_register:e6d58a9d-c2c0-466d-a76d-0e57a13534ec 00:10:02.689 00:23:56 -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:10:02.689 00:23:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:02.689 00:23:56 -- common/autotest_common.sh@10 -- # set +x 00:10:02.689 00:23:56 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:10:02.689 00:23:56 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:10:02.689 00:23:56 -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:10:02.689 00:23:56 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:10:02.689 00:23:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:02.689 00:23:56 -- common/autotest_common.sh@10 -- # set +x 00:10:02.689 00:23:56 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:10:02.689 00:23:56 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:02.689 00:23:56 -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:02.949 MallocBdevForConfigChangeCheck 00:10:02.949 00:23:56 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:10:02.949 00:23:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:02.949 00:23:56 -- common/autotest_common.sh@10 -- # set +x 00:10:03.208 00:23:56 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:10:03.208 00:23:56 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:03.469 00:23:57 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:10:03.469 INFO: shutting down applications... 00:10:03.469 00:23:57 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:10:03.469 00:23:57 -- json_config/json_config.sh@368 -- # json_config_clear target 00:10:03.469 00:23:57 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:10:03.470 00:23:57 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:03.728 [2024-04-24 00:23:57.320017] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:10:03.989 Calling clear_vhost_scsi_subsystem 00:10:03.989 Calling clear_iscsi_subsystem 00:10:03.989 Calling clear_vhost_blk_subsystem 00:10:03.989 Calling clear_nbd_subsystem 00:10:03.989 Calling clear_nvmf_subsystem 00:10:03.989 Calling clear_bdev_subsystem 00:10:03.989 00:23:57 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:10:03.989 00:23:57 -- json_config/json_config.sh@343 -- # count=100 00:10:03.989 00:23:57 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:10:03.989 00:23:57 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:03.989 00:23:57 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:03.989 00:23:57 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:10:04.247 00:23:57 -- json_config/json_config.sh@345 -- # break 00:10:04.247 00:23:57 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:10:04.247 00:23:57 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:10:04.247 00:23:57 -- json_config/common.sh@31 -- # local app=target 00:10:04.247 00:23:57 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:04.247 00:23:57 -- json_config/common.sh@35 -- # [[ -n 111209 ]] 00:10:04.247 00:23:57 -- json_config/common.sh@38 -- # kill -SIGINT 111209 00:10:04.247 00:23:57 -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:04.247 00:23:57 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:04.247 00:23:57 -- json_config/common.sh@41 -- # kill -0 111209 00:10:04.247 00:23:57 -- json_config/common.sh@45 -- # sleep 0.5 00:10:04.813 00:23:58 -- json_config/common.sh@40 -- # (( i++ )) 00:10:04.813 00:23:58 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:04.813 00:23:58 -- json_config/common.sh@41 -- # kill -0 111209 00:10:04.813 00:23:58 -- json_config/common.sh@45 -- # sleep 0.5 00:10:05.380 00:23:58 -- json_config/common.sh@40 -- # (( i++ )) 00:10:05.380 00:23:58 -- json_config/common.sh@40 -- # (( i < 
30 )) 00:10:05.380 00:23:58 -- json_config/common.sh@41 -- # kill -0 111209 00:10:05.380 00:23:58 -- json_config/common.sh@45 -- # sleep 0.5 00:10:05.637 00:23:59 -- json_config/common.sh@40 -- # (( i++ )) 00:10:05.637 00:23:59 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:05.637 00:23:59 -- json_config/common.sh@41 -- # kill -0 111209 00:10:05.637 00:23:59 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:05.637 00:23:59 -- json_config/common.sh@43 -- # break 00:10:05.637 00:23:59 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:05.637 00:23:59 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:05.637 SPDK target shutdown done 00:10:05.637 00:23:59 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:10:05.637 INFO: relaunching applications... 00:10:05.895 00:23:59 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:05.895 00:23:59 -- json_config/common.sh@9 -- # local app=target 00:10:05.895 00:23:59 -- json_config/common.sh@10 -- # shift 00:10:05.895 00:23:59 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:05.895 00:23:59 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:05.895 00:23:59 -- json_config/common.sh@15 -- # local app_extra_params= 00:10:05.895 00:23:59 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:05.895 00:23:59 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:05.895 00:23:59 -- json_config/common.sh@22 -- # app_pid["$app"]=111493 00:10:05.895 00:23:59 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:05.895 00:23:59 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:05.895 Waiting for target to run... 00:10:05.895 00:23:59 -- json_config/common.sh@25 -- # waitforlisten 111493 /var/tmp/spdk_tgt.sock 00:10:05.895 00:23:59 -- common/autotest_common.sh@817 -- # '[' -z 111493 ']' 00:10:05.895 00:23:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:05.895 00:23:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:05.895 00:23:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:05.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:05.895 00:23:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:05.895 00:23:59 -- common/autotest_common.sh@10 -- # set +x 00:10:05.895 [2024-04-24 00:23:59.526734] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:10:05.895 [2024-04-24 00:23:59.527966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111493 ] 00:10:06.459 [2024-04-24 00:23:59.953294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.459 [2024-04-24 00:24:00.224704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.426 [2024-04-24 00:24:01.145371] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:07.426 [2024-04-24 00:24:01.146067] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:07.426 [2024-04-24 00:24:01.153332] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:07.426 [2024-04-24 00:24:01.153611] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:07.426 [2024-04-24 00:24:01.161348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:07.426 [2024-04-24 00:24:01.161638] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:07.426 [2024-04-24 00:24:01.161921] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:07.684 [2024-04-24 00:24:01.253460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:07.684 [2024-04-24 00:24:01.253904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.684 [2024-04-24 00:24:01.254195] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:07.684 [2024-04-24 00:24:01.254450] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.685 [2024-04-24 00:24:01.255231] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.685 [2024-04-24 00:24:01.255527] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:08.623 00:24:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:08.623 00:24:02 -- common/autotest_common.sh@850 -- # return 0 00:10:08.623 00:24:02 -- json_config/common.sh@26 -- # echo '' 00:10:08.623 00:10:08.623 00:24:02 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:10:08.623 00:24:02 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:08.623 INFO: Checking if target configuration is the same... 00:10:08.623 00:24:02 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:08.623 00:24:02 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:10:08.623 00:24:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:08.623 + '[' 2 -ne 2 ']' 00:10:08.623 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:08.623 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:10:08.623 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:08.623 +++ basename /dev/fd/62 00:10:08.623 ++ mktemp /tmp/62.XXX 00:10:08.623 + tmp_file_1=/tmp/62.EU6 00:10:08.623 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:08.623 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:08.623 + tmp_file_2=/tmp/spdk_tgt_config.json.t7w 00:10:08.623 + ret=0 00:10:08.623 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:08.881 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:08.881 + diff -u /tmp/62.EU6 /tmp/spdk_tgt_config.json.t7w 00:10:08.881 INFO: JSON config files are the same 00:10:08.881 + echo 'INFO: JSON config files are the same' 00:10:08.881 + rm /tmp/62.EU6 /tmp/spdk_tgt_config.json.t7w 00:10:08.881 + exit 0 00:10:08.881 INFO: changing configuration and checking if this can be detected... 00:10:08.881 00:24:02 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:10:08.881 00:24:02 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:08.881 00:24:02 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:08.881 00:24:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:09.141 00:24:02 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:10:09.141 00:24:02 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:09.141 00:24:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:09.141 + '[' 2 -ne 2 ']' 00:10:09.141 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:09.141 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:09.141 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:09.141 +++ basename /dev/fd/62 00:10:09.141 ++ mktemp /tmp/62.XXX 00:10:09.141 + tmp_file_1=/tmp/62.hk9 00:10:09.141 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:09.141 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:09.141 + tmp_file_2=/tmp/spdk_tgt_config.json.ggz 00:10:09.141 + ret=0 00:10:09.141 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:09.399 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:09.657 + diff -u /tmp/62.hk9 /tmp/spdk_tgt_config.json.ggz 00:10:09.657 + ret=1 00:10:09.657 + echo '=== Start of file: /tmp/62.hk9 ===' 00:10:09.657 + cat /tmp/62.hk9 00:10:09.657 + echo '=== End of file: /tmp/62.hk9 ===' 00:10:09.657 + echo '' 00:10:09.657 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ggz ===' 00:10:09.657 + cat /tmp/spdk_tgt_config.json.ggz 00:10:09.657 + echo '=== End of file: /tmp/spdk_tgt_config.json.ggz ===' 00:10:09.657 + echo '' 00:10:09.657 + rm /tmp/62.hk9 /tmp/spdk_tgt_config.json.ggz 00:10:09.657 + exit 1 00:10:09.657 INFO: configuration change detected. 00:10:09.657 00:24:03 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
00:10:09.657 00:24:03 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:10:09.657 00:24:03 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:10:09.657 00:24:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:09.657 00:24:03 -- common/autotest_common.sh@10 -- # set +x 00:10:09.657 00:24:03 -- json_config/json_config.sh@307 -- # local ret=0 00:10:09.657 00:24:03 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:10:09.657 00:24:03 -- json_config/json_config.sh@317 -- # [[ -n 111493 ]] 00:10:09.657 00:24:03 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:10:09.657 00:24:03 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:10:09.657 00:24:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:09.657 00:24:03 -- common/autotest_common.sh@10 -- # set +x 00:10:09.657 00:24:03 -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:10:09.657 00:24:03 -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:09.657 00:24:03 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:09.914 00:24:03 -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:09.914 00:24:03 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:10.258 00:24:03 -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:10.258 00:24:03 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:10.516 00:24:04 -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:10.516 00:24:04 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:10.775 00:24:04 -- json_config/json_config.sh@193 -- # uname -s 00:10:10.775 00:24:04 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:10:10.775 00:24:04 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:10:10.775 00:24:04 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:10:10.775 00:24:04 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:10:10.775 00:24:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:10.775 00:24:04 -- common/autotest_common.sh@10 -- # set +x 00:10:10.775 00:24:04 -- json_config/json_config.sh@323 -- # killprocess 111493 00:10:10.775 00:24:04 -- common/autotest_common.sh@936 -- # '[' -z 111493 ']' 00:10:10.775 00:24:04 -- common/autotest_common.sh@940 -- # kill -0 111493 00:10:10.775 00:24:04 -- common/autotest_common.sh@941 -- # uname 00:10:10.775 00:24:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:10.775 00:24:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111493 00:10:10.775 00:24:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:10.775 00:24:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:10.775 00:24:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111493' 00:10:10.775 killing process with pid 111493 00:10:10.775 00:24:04 -- common/autotest_common.sh@955 -- # kill 111493 00:10:10.775 00:24:04 -- common/autotest_common.sh@960 -- # wait 111493 00:10:12.148 00:24:05 -- json_config/json_config.sh@326 -- # rm -f 
/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:12.148 00:24:05 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:10:12.148 00:24:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:12.148 00:24:05 -- common/autotest_common.sh@10 -- # set +x 00:10:12.148 INFO: Success 00:10:12.148 00:24:05 -- json_config/json_config.sh@328 -- # return 0 00:10:12.148 00:24:05 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:10:12.148 ************************************ 00:10:12.148 END TEST json_config 00:10:12.148 ************************************ 00:10:12.148 00:10:12.148 real 0m16.650s 00:10:12.148 user 0m22.887s 00:10:12.148 sys 0m2.887s 00:10:12.148 00:24:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:12.148 00:24:05 -- common/autotest_common.sh@10 -- # set +x 00:10:12.148 00:24:05 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:12.148 00:24:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:12.148 00:24:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.148 00:24:05 -- common/autotest_common.sh@10 -- # set +x 00:10:12.148 ************************************ 00:10:12.148 START TEST json_config_extra_key 00:10:12.148 ************************************ 00:10:12.148 00:24:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:12.148 00:24:05 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.148 00:24:05 -- nvmf/common.sh@7 -- # uname -s 00:10:12.148 00:24:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.148 00:24:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.148 00:24:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.148 00:24:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.148 00:24:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.148 00:24:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.148 00:24:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.148 00:24:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.148 00:24:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.148 00:24:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.148 00:24:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:deb27f07-7fb6-4623-bcfa-8db53c09b333 00:10:12.148 00:24:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=deb27f07-7fb6-4623-bcfa-8db53c09b333 00:10:12.148 00:24:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.148 00:24:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.148 00:24:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:12.148 00:24:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.148 00:24:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.148 00:24:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.148 00:24:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.148 00:24:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.148 00:24:05 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:12.148 00:24:05 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:12.148 00:24:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:12.148 00:24:05 -- paths/export.sh@5 -- # export PATH 00:10:12.148 00:24:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:12.148 00:24:05 -- nvmf/common.sh@47 -- # : 0 00:10:12.148 00:24:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.148 00:24:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.148 00:24:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.148 00:24:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.148 00:24:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.148 00:24:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.148 00:24:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.148 00:24:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.149 00:24:05 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:12.149 00:24:05 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:12.149 00:24:05 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:12.149 00:24:05 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:12.149 00:24:05 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:12.149 00:24:05 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:12.149 00:24:05 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:12.149 00:24:05 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:12.149 00:24:05 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:12.149 00:24:05 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:12.149 00:24:05 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:10:12.149 INFO: launching applications... 00:10:12.149 00:24:05 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:12.149 00:24:05 -- json_config/common.sh@9 -- # local app=target 00:10:12.149 00:24:05 -- json_config/common.sh@10 -- # shift 00:10:12.149 00:24:05 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:12.149 00:24:05 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:12.149 00:24:05 -- json_config/common.sh@15 -- # local app_extra_params= 00:10:12.149 00:24:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:12.149 00:24:05 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:12.149 00:24:05 -- json_config/common.sh@22 -- # app_pid["$app"]=111698 00:10:12.149 00:24:05 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:12.149 00:24:05 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:12.149 Waiting for target to run... 00:10:12.149 00:24:05 -- json_config/common.sh@25 -- # waitforlisten 111698 /var/tmp/spdk_tgt.sock 00:10:12.149 00:24:05 -- common/autotest_common.sh@817 -- # '[' -z 111698 ']' 00:10:12.149 00:24:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:12.149 00:24:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:12.149 00:24:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:12.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:12.149 00:24:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:12.149 00:24:05 -- common/autotest_common.sh@10 -- # set +x 00:10:12.149 [2024-04-24 00:24:05.920961] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:10:12.149 [2024-04-24 00:24:05.921392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111698 ] 00:10:12.716 [2024-04-24 00:24:06.342841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.974 [2024-04-24 00:24:06.557294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.909 00:10:13.909 INFO: shutting down applications... 00:10:13.909 00:24:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:13.909 00:24:07 -- common/autotest_common.sh@850 -- # return 0 00:10:13.909 00:24:07 -- json_config/common.sh@26 -- # echo '' 00:10:13.909 00:24:07 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:10:13.909 00:24:07 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:13.909 00:24:07 -- json_config/common.sh@31 -- # local app=target 00:10:13.909 00:24:07 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:13.909 00:24:07 -- json_config/common.sh@35 -- # [[ -n 111698 ]] 00:10:13.909 00:24:07 -- json_config/common.sh@38 -- # kill -SIGINT 111698 00:10:13.909 00:24:07 -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:13.909 00:24:07 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:13.909 00:24:07 -- json_config/common.sh@41 -- # kill -0 111698 00:10:13.909 00:24:07 -- json_config/common.sh@45 -- # sleep 0.5 00:10:14.167 00:24:07 -- json_config/common.sh@40 -- # (( i++ )) 00:10:14.167 00:24:07 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:14.167 00:24:07 -- json_config/common.sh@41 -- # kill -0 111698 00:10:14.167 00:24:07 -- json_config/common.sh@45 -- # sleep 0.5 00:10:14.732 00:24:08 -- json_config/common.sh@40 -- # (( i++ )) 00:10:14.732 00:24:08 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:14.732 00:24:08 -- json_config/common.sh@41 -- # kill -0 111698 00:10:14.732 00:24:08 -- json_config/common.sh@45 -- # sleep 0.5 00:10:15.298 00:24:08 -- json_config/common.sh@40 -- # (( i++ )) 00:10:15.298 00:24:08 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:15.298 00:24:08 -- json_config/common.sh@41 -- # kill -0 111698 00:10:15.298 00:24:08 -- json_config/common.sh@45 -- # sleep 0.5 00:10:15.864 00:24:09 -- json_config/common.sh@40 -- # (( i++ )) 00:10:15.864 00:24:09 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:15.864 00:24:09 -- json_config/common.sh@41 -- # kill -0 111698 00:10:15.864 00:24:09 -- json_config/common.sh@45 -- # sleep 0.5 00:10:16.173 00:24:09 -- json_config/common.sh@40 -- # (( i++ )) 00:10:16.173 00:24:09 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:16.173 00:24:09 -- json_config/common.sh@41 -- # kill -0 111698 00:10:16.173 00:24:09 -- json_config/common.sh@45 -- # sleep 0.5 00:10:16.758 00:24:10 -- json_config/common.sh@40 -- # (( i++ )) 00:10:16.758 00:24:10 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:16.758 00:24:10 -- json_config/common.sh@41 -- # kill -0 111698 00:10:16.758 00:24:10 -- json_config/common.sh@45 -- # sleep 0.5 00:10:17.323 00:24:10 -- json_config/common.sh@40 -- # (( i++ )) 00:10:17.323 00:24:10 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:17.323 00:24:10 -- json_config/common.sh@41 -- # kill -0 111698 00:10:17.323 00:24:10 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:17.323 00:24:10 -- json_config/common.sh@43 -- # break 00:10:17.323 00:24:10 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:17.323 00:24:10 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:17.323 SPDK target shutdown done 00:10:17.323 00:24:10 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:17.323 Success 00:10:17.323 ************************************ 00:10:17.323 END TEST json_config_extra_key 00:10:17.323 ************************************ 00:10:17.323 00:10:17.323 real 0m5.224s 00:10:17.323 user 0m4.719s 00:10:17.323 sys 0m0.650s 00:10:17.323 00:24:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:17.323 00:24:10 -- common/autotest_common.sh@10 -- # set +x 00:10:17.323 00:24:10 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:17.323 00:24:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:17.323 00:24:10 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:10:17.323 00:24:10 -- common/autotest_common.sh@10 -- # set +x 00:10:17.323 ************************************ 00:10:17.323 START TEST alias_rpc 00:10:17.323 ************************************ 00:10:17.323 00:24:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:17.323 * Looking for test storage... 00:10:17.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:17.580 00:24:11 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:17.580 00:24:11 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=111825 00:10:17.580 00:24:11 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 111825 00:10:17.580 00:24:11 -- common/autotest_common.sh@817 -- # '[' -z 111825 ']' 00:10:17.580 00:24:11 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.580 00:24:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.580 00:24:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:17.580 00:24:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.580 00:24:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:17.580 00:24:11 -- common/autotest_common.sh@10 -- # set +x 00:10:17.580 [2024-04-24 00:24:11.206138] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:10:17.580 [2024-04-24 00:24:11.206326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111825 ] 00:10:17.836 [2024-04-24 00:24:11.388471] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.836 [2024-04-24 00:24:11.625118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.209 00:24:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:19.209 00:24:12 -- common/autotest_common.sh@850 -- # return 0 00:10:19.209 00:24:12 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:19.209 00:24:12 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 111825 00:10:19.209 00:24:12 -- common/autotest_common.sh@936 -- # '[' -z 111825 ']' 00:10:19.209 00:24:12 -- common/autotest_common.sh@940 -- # kill -0 111825 00:10:19.209 00:24:12 -- common/autotest_common.sh@941 -- # uname 00:10:19.209 00:24:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:19.209 00:24:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111825 00:10:19.209 00:24:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:19.209 00:24:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:19.209 00:24:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111825' 00:10:19.209 killing process with pid 111825 00:10:19.209 00:24:12 -- common/autotest_common.sh@955 -- # kill 111825 00:10:19.209 00:24:12 -- common/autotest_common.sh@960 -- # wait 111825 00:10:22.490 ************************************ 00:10:22.490 END TEST alias_rpc 00:10:22.490 ************************************ 00:10:22.491 00:10:22.491 real 0m4.519s 00:10:22.491 user 0m4.742s 00:10:22.491 sys 0m0.513s 00:10:22.491 00:24:15 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:10:22.491 00:24:15 -- common/autotest_common.sh@10 -- # set +x 00:10:22.491 00:24:15 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:10:22.491 00:24:15 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:22.491 00:24:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:22.491 00:24:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.491 00:24:15 -- common/autotest_common.sh@10 -- # set +x 00:10:22.491 ************************************ 00:10:22.491 START TEST spdkcli_tcp 00:10:22.491 ************************************ 00:10:22.491 00:24:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:22.491 * Looking for test storage... 00:10:22.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:22.491 00:24:15 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:22.491 00:24:15 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:22.491 00:24:15 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:22.491 00:24:15 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:22.491 00:24:15 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:22.491 00:24:15 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:22.491 00:24:15 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:22.491 00:24:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:22.491 00:24:15 -- common/autotest_common.sh@10 -- # set +x 00:10:22.491 00:24:15 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=111949 00:10:22.491 00:24:15 -- spdkcli/tcp.sh@27 -- # waitforlisten 111949 00:10:22.491 00:24:15 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:22.491 00:24:15 -- common/autotest_common.sh@817 -- # '[' -z 111949 ']' 00:10:22.491 00:24:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.491 00:24:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:22.491 00:24:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.491 00:24:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:22.491 00:24:15 -- common/autotest_common.sh@10 -- # set +x 00:10:22.491 [2024-04-24 00:24:15.842795] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:10:22.491 [2024-04-24 00:24:15.843412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111949 ] 00:10:22.491 [2024-04-24 00:24:16.007348] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:22.491 [2024-04-24 00:24:16.228710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.491 [2024-04-24 00:24:16.228711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.425 00:24:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:23.425 00:24:17 -- common/autotest_common.sh@850 -- # return 0 00:10:23.425 00:24:17 -- spdkcli/tcp.sh@31 -- # socat_pid=111971 00:10:23.425 00:24:17 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:23.425 00:24:17 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:23.684 [ 00:10:23.684 "spdk_get_version", 00:10:23.684 "rpc_get_methods", 00:10:23.684 "keyring_get_keys", 00:10:23.684 "trace_get_info", 00:10:23.684 "trace_get_tpoint_group_mask", 00:10:23.684 "trace_disable_tpoint_group", 00:10:23.684 "trace_enable_tpoint_group", 00:10:23.684 "trace_clear_tpoint_mask", 00:10:23.684 "trace_set_tpoint_mask", 00:10:23.684 "framework_get_pci_devices", 00:10:23.684 "framework_get_config", 00:10:23.684 "framework_get_subsystems", 00:10:23.684 "iobuf_get_stats", 00:10:23.684 "iobuf_set_options", 00:10:23.684 "sock_set_default_impl", 00:10:23.684 "sock_impl_set_options", 00:10:23.684 "sock_impl_get_options", 00:10:23.684 "vmd_rescan", 00:10:23.684 "vmd_remove_device", 00:10:23.684 "vmd_enable", 00:10:23.684 "accel_get_stats", 00:10:23.684 "accel_set_options", 00:10:23.684 "accel_set_driver", 00:10:23.684 "accel_crypto_key_destroy", 00:10:23.684 "accel_crypto_keys_get", 00:10:23.684 "accel_crypto_key_create", 00:10:23.684 "accel_assign_opc", 00:10:23.684 "accel_get_module_info", 00:10:23.684 "accel_get_opc_assignments", 00:10:23.684 "notify_get_notifications", 00:10:23.684 "notify_get_types", 00:10:23.684 "bdev_get_histogram", 00:10:23.684 "bdev_enable_histogram", 00:10:23.684 "bdev_set_qos_limit", 00:10:23.684 "bdev_set_qd_sampling_period", 00:10:23.684 "bdev_get_bdevs", 00:10:23.684 "bdev_reset_iostat", 00:10:23.684 "bdev_get_iostat", 00:10:23.684 "bdev_examine", 00:10:23.684 "bdev_wait_for_examine", 00:10:23.684 "bdev_set_options", 00:10:23.684 "scsi_get_devices", 00:10:23.684 "thread_set_cpumask", 00:10:23.684 "framework_get_scheduler", 00:10:23.684 "framework_set_scheduler", 00:10:23.684 "framework_get_reactors", 00:10:23.684 "thread_get_io_channels", 00:10:23.684 "thread_get_pollers", 00:10:23.684 "thread_get_stats", 00:10:23.684 "framework_monitor_context_switch", 00:10:23.684 "spdk_kill_instance", 00:10:23.684 "log_enable_timestamps", 00:10:23.684 "log_get_flags", 00:10:23.684 "log_clear_flag", 00:10:23.684 "log_set_flag", 00:10:23.684 "log_get_level", 00:10:23.684 "log_set_level", 00:10:23.684 "log_get_print_level", 00:10:23.684 "log_set_print_level", 00:10:23.684 "framework_enable_cpumask_locks", 00:10:23.684 "framework_disable_cpumask_locks", 00:10:23.684 "framework_wait_init", 00:10:23.684 "framework_start_init", 00:10:23.684 "virtio_blk_create_transport", 00:10:23.684 "virtio_blk_get_transports", 00:10:23.684 "vhost_controller_set_coalescing", 00:10:23.684 "vhost_get_controllers", 00:10:23.684 
"vhost_delete_controller", 00:10:23.684 "vhost_create_blk_controller", 00:10:23.684 "vhost_scsi_controller_remove_target", 00:10:23.684 "vhost_scsi_controller_add_target", 00:10:23.684 "vhost_start_scsi_controller", 00:10:23.684 "vhost_create_scsi_controller", 00:10:23.684 "nbd_get_disks", 00:10:23.684 "nbd_stop_disk", 00:10:23.684 "nbd_start_disk", 00:10:23.684 "env_dpdk_get_mem_stats", 00:10:23.684 "nvmf_subsystem_get_listeners", 00:10:23.684 "nvmf_subsystem_get_qpairs", 00:10:23.684 "nvmf_subsystem_get_controllers", 00:10:23.684 "nvmf_get_stats", 00:10:23.684 "nvmf_get_transports", 00:10:23.684 "nvmf_create_transport", 00:10:23.684 "nvmf_get_targets", 00:10:23.684 "nvmf_delete_target", 00:10:23.684 "nvmf_create_target", 00:10:23.684 "nvmf_subsystem_allow_any_host", 00:10:23.684 "nvmf_subsystem_remove_host", 00:10:23.684 "nvmf_subsystem_add_host", 00:10:23.684 "nvmf_ns_remove_host", 00:10:23.684 "nvmf_ns_add_host", 00:10:23.684 "nvmf_subsystem_remove_ns", 00:10:23.684 "nvmf_subsystem_add_ns", 00:10:23.684 "nvmf_subsystem_listener_set_ana_state", 00:10:23.684 "nvmf_discovery_get_referrals", 00:10:23.684 "nvmf_discovery_remove_referral", 00:10:23.684 "nvmf_discovery_add_referral", 00:10:23.684 "nvmf_subsystem_remove_listener", 00:10:23.684 "nvmf_subsystem_add_listener", 00:10:23.684 "nvmf_delete_subsystem", 00:10:23.684 "nvmf_create_subsystem", 00:10:23.684 "nvmf_get_subsystems", 00:10:23.684 "nvmf_set_crdt", 00:10:23.684 "nvmf_set_config", 00:10:23.684 "nvmf_set_max_subsystems", 00:10:23.684 "iscsi_set_options", 00:10:23.684 "iscsi_get_auth_groups", 00:10:23.684 "iscsi_auth_group_remove_secret", 00:10:23.684 "iscsi_auth_group_add_secret", 00:10:23.684 "iscsi_delete_auth_group", 00:10:23.684 "iscsi_create_auth_group", 00:10:23.684 "iscsi_set_discovery_auth", 00:10:23.684 "iscsi_get_options", 00:10:23.684 "iscsi_target_node_request_logout", 00:10:23.684 "iscsi_target_node_set_redirect", 00:10:23.684 "iscsi_target_node_set_auth", 00:10:23.684 "iscsi_target_node_add_lun", 00:10:23.684 "iscsi_get_stats", 00:10:23.684 "iscsi_get_connections", 00:10:23.684 "iscsi_portal_group_set_auth", 00:10:23.684 "iscsi_start_portal_group", 00:10:23.684 "iscsi_delete_portal_group", 00:10:23.684 "iscsi_create_portal_group", 00:10:23.684 "iscsi_get_portal_groups", 00:10:23.684 "iscsi_delete_target_node", 00:10:23.684 "iscsi_target_node_remove_pg_ig_maps", 00:10:23.684 "iscsi_target_node_add_pg_ig_maps", 00:10:23.684 "iscsi_create_target_node", 00:10:23.684 "iscsi_get_target_nodes", 00:10:23.684 "iscsi_delete_initiator_group", 00:10:23.684 "iscsi_initiator_group_remove_initiators", 00:10:23.684 "iscsi_initiator_group_add_initiators", 00:10:23.684 "iscsi_create_initiator_group", 00:10:23.684 "iscsi_get_initiator_groups", 00:10:23.684 "keyring_linux_set_options", 00:10:23.684 "keyring_file_remove_key", 00:10:23.684 "keyring_file_add_key", 00:10:23.684 "iaa_scan_accel_module", 00:10:23.684 "dsa_scan_accel_module", 00:10:23.684 "ioat_scan_accel_module", 00:10:23.684 "accel_error_inject_error", 00:10:23.684 "bdev_iscsi_delete", 00:10:23.684 "bdev_iscsi_create", 00:10:23.684 "bdev_iscsi_set_options", 00:10:23.684 "bdev_virtio_attach_controller", 00:10:23.684 "bdev_virtio_scsi_get_devices", 00:10:23.684 "bdev_virtio_detach_controller", 00:10:23.684 "bdev_virtio_blk_set_hotplug", 00:10:23.684 "bdev_ftl_set_property", 00:10:23.684 "bdev_ftl_get_properties", 00:10:23.684 "bdev_ftl_get_stats", 00:10:23.684 "bdev_ftl_unmap", 00:10:23.684 "bdev_ftl_unload", 00:10:23.684 "bdev_ftl_delete", 00:10:23.684 "bdev_ftl_load", 
00:10:23.684 "bdev_ftl_create", 00:10:23.684 "bdev_aio_delete", 00:10:23.685 "bdev_aio_rescan", 00:10:23.685 "bdev_aio_create", 00:10:23.685 "blobfs_create", 00:10:23.685 "blobfs_detect", 00:10:23.685 "blobfs_set_cache_size", 00:10:23.685 "bdev_zone_block_delete", 00:10:23.685 "bdev_zone_block_create", 00:10:23.685 "bdev_delay_delete", 00:10:23.685 "bdev_delay_create", 00:10:23.685 "bdev_delay_update_latency", 00:10:23.685 "bdev_split_delete", 00:10:23.685 "bdev_split_create", 00:10:23.685 "bdev_error_inject_error", 00:10:23.685 "bdev_error_delete", 00:10:23.685 "bdev_error_create", 00:10:23.685 "bdev_raid_set_options", 00:10:23.685 "bdev_raid_remove_base_bdev", 00:10:23.685 "bdev_raid_add_base_bdev", 00:10:23.685 "bdev_raid_delete", 00:10:23.685 "bdev_raid_create", 00:10:23.685 "bdev_raid_get_bdevs", 00:10:23.685 "bdev_lvol_grow_lvstore", 00:10:23.685 "bdev_lvol_get_lvols", 00:10:23.685 "bdev_lvol_get_lvstores", 00:10:23.685 "bdev_lvol_delete", 00:10:23.685 "bdev_lvol_set_read_only", 00:10:23.685 "bdev_lvol_resize", 00:10:23.685 "bdev_lvol_decouple_parent", 00:10:23.685 "bdev_lvol_inflate", 00:10:23.685 "bdev_lvol_rename", 00:10:23.685 "bdev_lvol_clone_bdev", 00:10:23.685 "bdev_lvol_clone", 00:10:23.685 "bdev_lvol_snapshot", 00:10:23.685 "bdev_lvol_create", 00:10:23.685 "bdev_lvol_delete_lvstore", 00:10:23.685 "bdev_lvol_rename_lvstore", 00:10:23.685 "bdev_lvol_create_lvstore", 00:10:23.685 "bdev_passthru_delete", 00:10:23.685 "bdev_passthru_create", 00:10:23.685 "bdev_nvme_cuse_unregister", 00:10:23.685 "bdev_nvme_cuse_register", 00:10:23.685 "bdev_opal_new_user", 00:10:23.685 "bdev_opal_set_lock_state", 00:10:23.685 "bdev_opal_delete", 00:10:23.685 "bdev_opal_get_info", 00:10:23.685 "bdev_opal_create", 00:10:23.685 "bdev_nvme_opal_revert", 00:10:23.685 "bdev_nvme_opal_init", 00:10:23.685 "bdev_nvme_send_cmd", 00:10:23.685 "bdev_nvme_get_path_iostat", 00:10:23.685 "bdev_nvme_get_mdns_discovery_info", 00:10:23.685 "bdev_nvme_stop_mdns_discovery", 00:10:23.685 "bdev_nvme_start_mdns_discovery", 00:10:23.685 "bdev_nvme_set_multipath_policy", 00:10:23.685 "bdev_nvme_set_preferred_path", 00:10:23.685 "bdev_nvme_get_io_paths", 00:10:23.685 "bdev_nvme_remove_error_injection", 00:10:23.685 "bdev_nvme_add_error_injection", 00:10:23.685 "bdev_nvme_get_discovery_info", 00:10:23.685 "bdev_nvme_stop_discovery", 00:10:23.685 "bdev_nvme_start_discovery", 00:10:23.685 "bdev_nvme_get_controller_health_info", 00:10:23.685 "bdev_nvme_disable_controller", 00:10:23.685 "bdev_nvme_enable_controller", 00:10:23.685 "bdev_nvme_reset_controller", 00:10:23.685 "bdev_nvme_get_transport_statistics", 00:10:23.685 "bdev_nvme_apply_firmware", 00:10:23.685 "bdev_nvme_detach_controller", 00:10:23.685 "bdev_nvme_get_controllers", 00:10:23.685 "bdev_nvme_attach_controller", 00:10:23.685 "bdev_nvme_set_hotplug", 00:10:23.685 "bdev_nvme_set_options", 00:10:23.685 "bdev_null_resize", 00:10:23.685 "bdev_null_delete", 00:10:23.685 "bdev_null_create", 00:10:23.685 "bdev_malloc_delete", 00:10:23.685 "bdev_malloc_create" 00:10:23.685 ] 00:10:23.685 00:24:17 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:23.685 00:24:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:23.685 00:24:17 -- common/autotest_common.sh@10 -- # set +x 00:10:23.942 00:24:17 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:23.942 00:24:17 -- spdkcli/tcp.sh@38 -- # killprocess 111949 00:10:23.942 00:24:17 -- common/autotest_common.sh@936 -- # '[' -z 111949 ']' 00:10:23.942 00:24:17 -- common/autotest_common.sh@940 -- # kill -0 
111949 00:10:23.942 00:24:17 -- common/autotest_common.sh@941 -- # uname 00:10:23.942 00:24:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:23.942 00:24:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111949 00:10:23.942 00:24:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:23.942 00:24:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:23.943 00:24:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111949' 00:10:23.943 killing process with pid 111949 00:10:23.943 00:24:17 -- common/autotest_common.sh@955 -- # kill 111949 00:10:23.943 00:24:17 -- common/autotest_common.sh@960 -- # wait 111949 00:10:26.473 00:10:26.473 real 0m4.514s 00:10:26.473 user 0m8.218s 00:10:26.473 sys 0m0.570s 00:10:26.473 00:24:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:26.473 00:24:20 -- common/autotest_common.sh@10 -- # set +x 00:10:26.473 ************************************ 00:10:26.473 END TEST spdkcli_tcp 00:10:26.473 ************************************ 00:10:26.473 00:24:20 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:26.473 00:24:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:26.473 00:24:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:26.473 00:24:20 -- common/autotest_common.sh@10 -- # set +x 00:10:26.731 ************************************ 00:10:26.731 START TEST dpdk_mem_utility 00:10:26.731 ************************************ 00:10:26.731 00:24:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:26.731 * Looking for test storage... 00:10:26.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:26.731 00:24:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:26.731 00:24:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=112080 00:10:26.731 00:24:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:26.731 00:24:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 112080 00:10:26.731 00:24:20 -- common/autotest_common.sh@817 -- # '[' -z 112080 ']' 00:10:26.731 00:24:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.731 00:24:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:26.731 00:24:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.731 00:24:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:26.731 00:24:20 -- common/autotest_common.sh@10 -- # set +x 00:10:26.731 [2024-04-24 00:24:20.474884] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:10:26.731 [2024-04-24 00:24:20.475080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112080 ] 00:10:26.990 [2024-04-24 00:24:20.651476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.248 [2024-04-24 00:24:20.874029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.213 00:24:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:28.213 00:24:21 -- common/autotest_common.sh@850 -- # return 0 00:10:28.213 00:24:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:28.213 00:24:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:28.213 00:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:28.213 00:24:21 -- common/autotest_common.sh@10 -- # set +x 00:10:28.213 { 00:10:28.213 "filename": "/tmp/spdk_mem_dump.txt" 00:10:28.213 } 00:10:28.213 00:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:28.213 00:24:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:28.213 DPDK memory size 820.000000 MiB in 1 heap(s) 00:10:28.213 1 heaps totaling size 820.000000 MiB 00:10:28.213 size: 820.000000 MiB heap id: 0 00:10:28.213 end heaps---------- 00:10:28.213 8 mempools totaling size 598.116089 MiB 00:10:28.213 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:28.213 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:28.213 size: 84.521057 MiB name: bdev_io_112080 00:10:28.213 size: 51.011292 MiB name: evtpool_112080 00:10:28.213 size: 50.003479 MiB name: msgpool_112080 00:10:28.213 size: 21.763794 MiB name: PDU_Pool 00:10:28.213 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:28.213 size: 0.026123 MiB name: Session_Pool 00:10:28.213 end mempools------- 00:10:28.213 6 memzones totaling size 4.142822 MiB 00:10:28.213 size: 1.000366 MiB name: RG_ring_0_112080 00:10:28.214 size: 1.000366 MiB name: RG_ring_1_112080 00:10:28.214 size: 1.000366 MiB name: RG_ring_4_112080 00:10:28.214 size: 1.000366 MiB name: RG_ring_5_112080 00:10:28.214 size: 0.125366 MiB name: RG_ring_2_112080 00:10:28.214 size: 0.015991 MiB name: RG_ring_3_112080 00:10:28.214 end memzones------- 00:10:28.214 00:24:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:28.214 heap id: 0 total size: 820.000000 MiB number of busy elements: 223 number of free elements: 18 00:10:28.214 list of free elements. 
size: 18.468506 MiB 00:10:28.214 element at address: 0x200000400000 with size: 1.999451 MiB 00:10:28.214 element at address: 0x200000800000 with size: 1.996887 MiB 00:10:28.214 element at address: 0x200007000000 with size: 1.995972 MiB 00:10:28.214 element at address: 0x20000b200000 with size: 1.995972 MiB 00:10:28.214 element at address: 0x200019100040 with size: 0.999939 MiB 00:10:28.214 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:28.214 element at address: 0x200019600000 with size: 0.999329 MiB 00:10:28.214 element at address: 0x200003e00000 with size: 0.996094 MiB 00:10:28.214 element at address: 0x200032200000 with size: 0.994324 MiB 00:10:28.214 element at address: 0x200018e00000 with size: 0.959656 MiB 00:10:28.214 element at address: 0x200019900040 with size: 0.937256 MiB 00:10:28.214 element at address: 0x200000200000 with size: 0.834106 MiB 00:10:28.214 element at address: 0x20001b000000 with size: 0.562195 MiB 00:10:28.214 element at address: 0x200019200000 with size: 0.489197 MiB 00:10:28.214 element at address: 0x200019a00000 with size: 0.485413 MiB 00:10:28.214 element at address: 0x200013800000 with size: 0.469116 MiB 00:10:28.214 element at address: 0x200028400000 with size: 0.399719 MiB 00:10:28.214 element at address: 0x200003a00000 with size: 0.353943 MiB 00:10:28.214 list of standard malloc elements. size: 199.267090 MiB 00:10:28.214 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:10:28.214 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:10:28.214 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:10:28.214 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:28.214 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:28.214 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:28.214 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:10:28.214 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:28.214 element at address: 0x200003aff180 with size: 0.002197 MiB 00:10:28.214 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:10:28.214 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:10:28.214 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:10:28.214 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6980 with size: 0.000244 MiB 
00:10:28.214 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:28.214 element at address: 0x200003aff080 with size: 0.000244 MiB 00:10:28.214 element at address: 0x200003affa80 with size: 0.000244 MiB 00:10:28.214 element at address: 0x200003eff000 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x200013878180 with size: 0.000244 MiB 00:10:28.214 element at address: 0x200013878280 with size: 0.000244 MiB 00:10:28.214 element at address: 0x200013878380 with size: 0.000244 MiB 00:10:28.214 element at 
address: 0x200013878480 with size: 0.000244 MiB 00:10:28.214 element at address: 0x200013878580 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:28.214 element at address: 0x200019abc680 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:10:28.214 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0922c0 
with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20001b0953c0 with size: 0.000244 MiB 
00:10:28.215 element at address: 0x200028466540 with size: 0.000244 MiB 00:10:28.215 element at address: 0x200028466640 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846d300 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846d580 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846d680 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846d780 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846d880 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846d980 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846da80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846db80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846de80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846df80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846e080 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846e180 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846e280 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846e380 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846e480 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846e580 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846e680 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846e780 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846e880 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846e980 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846f080 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846f180 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846f280 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846f380 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846f480 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846f580 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846f680 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846f780 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846f880 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846f980 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:10:28.215 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:10:28.215 list of memzone associated elements. 
size: 602.264404 MiB 00:10:28.215 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:10:28.215 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:28.215 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:10:28.215 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:28.215 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:10:28.215 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_112080_0 00:10:28.215 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:10:28.215 associated memzone info: size: 48.002930 MiB name: MP_evtpool_112080_0 00:10:28.215 element at address: 0x200003fff340 with size: 48.003113 MiB 00:10:28.215 associated memzone info: size: 48.002930 MiB name: MP_msgpool_112080_0 00:10:28.215 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:10:28.215 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:28.215 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:10:28.215 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:28.215 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:10:28.215 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_112080 00:10:28.215 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:10:28.215 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_112080 00:10:28.215 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:28.215 associated memzone info: size: 1.007996 MiB name: MP_evtpool_112080 00:10:28.215 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:28.215 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:28.215 element at address: 0x200019abc780 with size: 1.008179 MiB 00:10:28.215 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:28.215 element at address: 0x200018efde00 with size: 1.008179 MiB 00:10:28.215 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:28.215 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:10:28.215 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:28.215 element at address: 0x200003eff100 with size: 1.000549 MiB 00:10:28.215 associated memzone info: size: 1.000366 MiB name: RG_ring_0_112080 00:10:28.215 element at address: 0x200003affb80 with size: 1.000549 MiB 00:10:28.215 associated memzone info: size: 1.000366 MiB name: RG_ring_1_112080 00:10:28.215 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:10:28.215 associated memzone info: size: 1.000366 MiB name: RG_ring_4_112080 00:10:28.215 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:10:28.215 associated memzone info: size: 1.000366 MiB name: RG_ring_5_112080 00:10:28.215 element at address: 0x200003a5a9c0 with size: 0.500549 MiB 00:10:28.215 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_112080 00:10:28.215 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:10:28.215 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:28.215 element at address: 0x200013878680 with size: 0.500549 MiB 00:10:28.215 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:28.215 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:10:28.215 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:28.215 element at address: 0x200003adee40 with size: 0.125549 MiB 00:10:28.215 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_112080 00:10:28.215 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:10:28.215 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:28.215 element at address: 0x200028466740 with size: 0.023804 MiB 00:10:28.215 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:28.215 element at address: 0x200003adac00 with size: 0.016174 MiB 00:10:28.215 associated memzone info: size: 0.015991 MiB name: RG_ring_3_112080 00:10:28.216 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:10:28.216 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:28.216 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:10:28.216 associated memzone info: size: 0.000183 MiB name: MP_msgpool_112080 00:10:28.216 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:10:28.216 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_112080 00:10:28.216 element at address: 0x20002846d400 with size: 0.000366 MiB 00:10:28.216 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:28.216 00:24:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:28.216 00:24:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 112080 00:10:28.216 00:24:21 -- common/autotest_common.sh@936 -- # '[' -z 112080 ']' 00:10:28.216 00:24:21 -- common/autotest_common.sh@940 -- # kill -0 112080 00:10:28.216 00:24:21 -- common/autotest_common.sh@941 -- # uname 00:10:28.216 00:24:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:28.216 00:24:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112080 00:10:28.474 00:24:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:28.474 00:24:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:28.474 00:24:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112080' 00:10:28.474 killing process with pid 112080 00:10:28.474 00:24:22 -- common/autotest_common.sh@955 -- # kill 112080 00:10:28.474 00:24:22 -- common/autotest_common.sh@960 -- # wait 112080 00:10:31.004 00:10:31.004 real 0m4.370s 00:10:31.004 user 0m4.435s 00:10:31.004 sys 0m0.586s 00:10:31.004 00:24:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:31.004 00:24:24 -- common/autotest_common.sh@10 -- # set +x 00:10:31.004 ************************************ 00:10:31.004 END TEST dpdk_mem_utility 00:10:31.004 ************************************ 00:10:31.004 00:24:24 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:31.004 00:24:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:31.004 00:24:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:31.004 00:24:24 -- common/autotest_common.sh@10 -- # set +x 00:10:31.004 ************************************ 00:10:31.004 START TEST event 00:10:31.004 ************************************ 00:10:31.004 00:24:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:31.262 * Looking for test storage... 
00:10:31.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:31.262 00:24:24 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:31.262 00:24:24 -- bdev/nbd_common.sh@6 -- # set -e 00:10:31.262 00:24:24 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:31.262 00:24:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:31.262 00:24:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:31.262 00:24:24 -- common/autotest_common.sh@10 -- # set +x 00:10:31.262 ************************************ 00:10:31.262 START TEST event_perf 00:10:31.262 ************************************ 00:10:31.262 00:24:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:31.262 Running I/O for 1 seconds...[2024-04-24 00:24:24.920976] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:10:31.262 [2024-04-24 00:24:24.921105] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112212 ] 00:10:31.520 [2024-04-24 00:24:25.100958] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.778 [2024-04-24 00:24:25.319259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.778 [2024-04-24 00:24:25.319440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.778 [2024-04-24 00:24:25.319376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.778 [2024-04-24 00:24:25.319442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.152 Running I/O for 1 seconds... 00:10:33.152 lcore 0: 141855 00:10:33.152 lcore 1: 141858 00:10:33.152 lcore 2: 141861 00:10:33.152 lcore 3: 141864 00:10:33.152 done. 00:10:33.152 00:10:33.152 real 0m1.899s 00:10:33.152 user 0m4.665s 00:10:33.152 sys 0m0.132s 00:10:33.152 00:24:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:33.152 00:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:33.152 ************************************ 00:10:33.152 END TEST event_perf 00:10:33.152 ************************************ 00:10:33.152 00:24:26 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:33.152 00:24:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:33.152 00:24:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:33.152 00:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:33.152 ************************************ 00:10:33.152 START TEST event_reactor 00:10:33.152 ************************************ 00:10:33.152 00:24:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:33.152 [2024-04-24 00:24:26.936599] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:10:33.152 [2024-04-24 00:24:26.936815] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112269 ] 00:10:33.411 [2024-04-24 00:24:27.117609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.669 [2024-04-24 00:24:27.390589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.086 test_start 00:10:35.086 oneshot 00:10:35.086 tick 100 00:10:35.086 tick 100 00:10:35.086 tick 250 00:10:35.086 tick 100 00:10:35.086 tick 100 00:10:35.086 tick 100 00:10:35.086 tick 250 00:10:35.086 tick 500 00:10:35.086 tick 100 00:10:35.086 tick 100 00:10:35.086 tick 250 00:10:35.086 tick 100 00:10:35.086 tick 100 00:10:35.086 test_end 00:10:35.086 00:10:35.086 real 0m1.936s 00:10:35.087 user 0m1.704s 00:10:35.087 sys 0m0.133s 00:10:35.087 00:24:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:35.087 00:24:28 -- common/autotest_common.sh@10 -- # set +x 00:10:35.087 ************************************ 00:10:35.087 END TEST event_reactor 00:10:35.087 ************************************ 00:10:35.087 00:24:28 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:35.087 00:24:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:35.087 00:24:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:35.087 00:24:28 -- common/autotest_common.sh@10 -- # set +x 00:10:35.344 ************************************ 00:10:35.344 START TEST event_reactor_perf 00:10:35.344 ************************************ 00:10:35.344 00:24:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:35.344 [2024-04-24 00:24:28.973682] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:10:35.344 [2024-04-24 00:24:28.973909] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112323 ] 00:10:35.602 [2024-04-24 00:24:29.149459] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.602 [2024-04-24 00:24:29.378579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.502 test_start 00:10:37.502 test_end 00:10:37.502 Performance: 362506 events per second 00:10:37.502 00:10:37.502 real 0m1.906s 00:10:37.502 user 0m1.647s 00:10:37.502 sys 0m0.160s 00:10:37.502 00:24:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:37.502 00:24:30 -- common/autotest_common.sh@10 -- # set +x 00:10:37.502 ************************************ 00:10:37.502 END TEST event_reactor_perf 00:10:37.502 ************************************ 00:10:37.502 00:24:30 -- event/event.sh@49 -- # uname -s 00:10:37.502 00:24:30 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:37.502 00:24:30 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:37.502 00:24:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:37.502 00:24:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:37.502 00:24:30 -- common/autotest_common.sh@10 -- # set +x 00:10:37.502 ************************************ 00:10:37.502 START TEST event_scheduler 00:10:37.502 ************************************ 00:10:37.502 00:24:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:37.502 * Looking for test storage... 00:10:37.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:37.502 00:24:31 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:37.502 00:24:31 -- scheduler/scheduler.sh@35 -- # scheduler_pid=112408 00:10:37.502 00:24:31 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:37.502 00:24:31 -- scheduler/scheduler.sh@37 -- # waitforlisten 112408 00:10:37.502 00:24:31 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:37.502 00:24:31 -- common/autotest_common.sh@817 -- # '[' -z 112408 ']' 00:10:37.502 00:24:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.502 00:24:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:37.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.502 00:24:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.502 00:24:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:37.502 00:24:31 -- common/autotest_common.sh@10 -- # set +x 00:10:37.502 [2024-04-24 00:24:31.144576] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:10:37.502 [2024-04-24 00:24:31.144788] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112408 ] 00:10:37.761 [2024-04-24 00:24:31.358133] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.019 [2024-04-24 00:24:31.612454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.019 [2024-04-24 00:24:31.612602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.019 [2024-04-24 00:24:31.612727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.019 [2024-04-24 00:24:31.612730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.585 00:24:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:38.585 00:24:32 -- common/autotest_common.sh@850 -- # return 0 00:10:38.585 00:24:32 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:38.585 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.585 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:38.585 POWER: Env isn't set yet! 00:10:38.585 POWER: Attempting to initialise ACPI cpufreq power management... 00:10:38.585 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:38.585 POWER: Cannot set governor of lcore 0 to userspace 00:10:38.585 POWER: Attempting to initialise PSTAT power management... 00:10:38.585 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:38.585 POWER: Cannot set governor of lcore 0 to performance 00:10:38.585 POWER: Attempting to initialise AMD PSTATE power management... 00:10:38.585 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:38.585 POWER: Cannot set governor of lcore 0 to userspace 00:10:38.585 POWER: Attempting to initialise CPPC power management... 00:10:38.585 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:38.585 POWER: Cannot set governor of lcore 0 to userspace 00:10:38.585 POWER: Attempting to initialise VM power management... 00:10:38.585 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:38.585 POWER: Unable to set Power Management Environment for lcore 0 00:10:38.585 [2024-04-24 00:24:32.129601] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:10:38.585 [2024-04-24 00:24:32.129739] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:10:38.585 [2024-04-24 00:24:32.129850] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:10:38.585 00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:38.585 00:24:32 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:38.585 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.585 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 [2024-04-24 00:24:32.531511] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
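(The POWER errors above come from the dynamic scheduler probing the Linux cpufreq interface; in this VM no scaling driver is writable, so each governor attempt — ACPI cpufreq, PSTAT, AMD PSTATE, CPPC, and the VM power channel — fails and the dpdk_governor falls back, which is non-fatal for the test. A minimal check one could run on a bare-metal host to see whether those probes would succeed is sketched below; the cpu0 index is illustrative and scaling_available_governors is the standard cpufreq sysfs file, not something printed in this log:
  # list the governors the kernel would accept for core 0
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
  # the file the scheduler tries, and here fails, to open for writing
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
)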
00:10:38.843 00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:38.843 00:24:32 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:38.843 00:24:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:38.843 00:24:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:38.843 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 ************************************ 00:10:38.843 START TEST scheduler_create_thread 00:10:38.843 ************************************ 00:10:38.843 00:24:32 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:10:38.843 00:24:32 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:38.843 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.843 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 2 00:10:38.843 00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:38.843 00:24:32 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:38.843 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.843 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 3 00:10:38.843 00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:38.843 00:24:32 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:38.843 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.843 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 4 00:10:38.843 00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:38.843 00:24:32 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:38.843 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.843 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 5 00:10:38.843 00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:38.843 00:24:32 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:38.843 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.843 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 6 00:10:38.843 00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:38.843 00:24:32 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:38.843 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.843 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:38.843 7 00:10:38.843 00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:38.843 00:24:32 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:38.843 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:38.843 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:39.099 8 00:10:39.099 00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.099 00:24:32 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:39.099 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.099 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:39.099 9 00:10:39.099 
00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.099 00:24:32 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:39.099 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.099 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:39.099 10 00:10:39.099 00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.099 00:24:32 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:39.099 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.099 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:39.099 00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.099 00:24:32 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:39.099 00:24:32 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:39.099 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.100 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:39.100 00:24:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:39.100 00:24:32 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:39.100 00:24:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:39.100 00:24:32 -- common/autotest_common.sh@10 -- # set +x 00:10:40.032 00:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.032 00:24:33 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:40.032 00:24:33 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:40.032 00:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.032 00:24:33 -- common/autotest_common.sh@10 -- # set +x 00:10:40.965 00:24:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.965 00:10:40.965 real 0m2.155s 00:10:40.965 user 0m0.024s 00:10:40.965 sys 0m0.001s 00:10:40.965 00:24:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:40.965 ************************************ 00:10:40.965 00:24:34 -- common/autotest_common.sh@10 -- # set +x 00:10:40.965 END TEST scheduler_create_thread 00:10:40.965 ************************************ 00:10:41.222 00:24:34 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:41.222 00:24:34 -- scheduler/scheduler.sh@46 -- # killprocess 112408 00:10:41.222 00:24:34 -- common/autotest_common.sh@936 -- # '[' -z 112408 ']' 00:10:41.222 00:24:34 -- common/autotest_common.sh@940 -- # kill -0 112408 00:10:41.222 00:24:34 -- common/autotest_common.sh@941 -- # uname 00:10:41.222 00:24:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:41.222 00:24:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112408 00:10:41.222 00:24:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:41.222 killing process with pid 112408 00:10:41.222 00:24:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:41.222 00:24:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112408' 00:10:41.222 00:24:34 -- common/autotest_common.sh@955 -- # kill 112408 00:10:41.222 00:24:34 -- common/autotest_common.sh@960 -- # wait 112408 00:10:41.479 [2024-04-24 00:24:35.208865] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
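(The scheduler_create_thread subtest above drives everything through rpc_cmd, the autotest wrapper around scripts/rpc.py, with the test's scheduler_plugin loaded. A condensed sketch of the same sequence, assuming the scheduler app from scheduler.sh is still listening on the default /var/tmp/spdk.sock; the thread names, masks, loads and ids are the ones visible in the trace:
  # threads pinned to single cores, fully active (-a 100) or idle (-a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # unpinned threads with partial or zero load
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  # raise that thread to 50% active, then create and delete a scratch thread
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
)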
00:10:43.381 00:10:43.381 real 0m5.902s 00:10:43.381 user 0m10.214s 00:10:43.381 sys 0m0.499s 00:10:43.381 00:24:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:43.381 00:24:36 -- common/autotest_common.sh@10 -- # set +x 00:10:43.381 ************************************ 00:10:43.381 END TEST event_scheduler 00:10:43.381 ************************************ 00:10:43.381 00:24:36 -- event/event.sh@51 -- # modprobe -n nbd 00:10:43.381 00:24:36 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:43.381 00:24:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:43.381 00:24:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:43.381 00:24:36 -- common/autotest_common.sh@10 -- # set +x 00:10:43.381 ************************************ 00:10:43.381 START TEST app_repeat 00:10:43.381 ************************************ 00:10:43.381 00:24:36 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:10:43.381 00:24:36 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.381 00:24:36 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:43.381 00:24:36 -- event/event.sh@13 -- # local nbd_list 00:10:43.381 00:24:36 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:43.381 00:24:36 -- event/event.sh@14 -- # local bdev_list 00:10:43.381 00:24:36 -- event/event.sh@15 -- # local repeat_times=4 00:10:43.381 00:24:36 -- event/event.sh@17 -- # modprobe nbd 00:10:43.381 00:24:36 -- event/event.sh@19 -- # repeat_pid=112547 00:10:43.381 00:24:36 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:43.381 00:24:36 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:43.381 Process app_repeat pid: 112547 00:10:43.381 00:24:36 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 112547' 00:10:43.381 00:24:36 -- event/event.sh@23 -- # for i in {0..2} 00:10:43.381 spdk_app_start Round 0 00:10:43.381 00:24:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:43.381 00:24:36 -- event/event.sh@25 -- # waitforlisten 112547 /var/tmp/spdk-nbd.sock 00:10:43.381 00:24:36 -- common/autotest_common.sh@817 -- # '[' -z 112547 ']' 00:10:43.381 00:24:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:43.381 00:24:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:43.381 00:24:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:43.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:43.381 00:24:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:43.381 00:24:36 -- common/autotest_common.sh@10 -- # set +x 00:10:43.381 [2024-04-24 00:24:37.034663] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:10:43.381 [2024-04-24 00:24:37.034940] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112547 ] 00:10:43.638 [2024-04-24 00:24:37.218310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:43.896 [2024-04-24 00:24:37.489070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.896 [2024-04-24 00:24:37.489084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.462 00:24:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:44.462 00:24:38 -- common/autotest_common.sh@850 -- # return 0 00:10:44.462 00:24:38 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:44.738 Malloc0 00:10:44.738 00:24:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:45.324 Malloc1 00:10:45.324 00:24:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@12 -- # local i 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:45.324 00:24:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:45.583 /dev/nbd0 00:10:45.583 00:24:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:45.583 00:24:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:45.583 00:24:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:10:45.583 00:24:39 -- common/autotest_common.sh@855 -- # local i 00:10:45.583 00:24:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:45.583 00:24:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:45.583 00:24:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:10:45.583 00:24:39 -- common/autotest_common.sh@859 -- # break 00:10:45.583 00:24:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:45.583 00:24:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:45.583 00:24:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:45.583 1+0 records in 00:10:45.583 1+0 records out 00:10:45.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049578 s, 8.3 MB/s 00:10:45.583 00:24:39 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:45.583 00:24:39 -- common/autotest_common.sh@872 -- # size=4096 00:10:45.583 00:24:39 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:45.583 00:24:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:45.583 00:24:39 -- common/autotest_common.sh@875 -- # return 0 00:10:45.583 00:24:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:45.583 00:24:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:45.583 00:24:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:45.842 /dev/nbd1 00:10:45.842 00:24:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:45.842 00:24:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:45.842 00:24:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:10:45.842 00:24:39 -- common/autotest_common.sh@855 -- # local i 00:10:45.842 00:24:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:45.842 00:24:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:45.842 00:24:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:10:45.842 00:24:39 -- common/autotest_common.sh@859 -- # break 00:10:45.842 00:24:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:45.842 00:24:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:45.842 00:24:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:45.842 1+0 records in 00:10:45.842 1+0 records out 00:10:45.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377406 s, 10.9 MB/s 00:10:45.842 00:24:39 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:45.842 00:24:39 -- common/autotest_common.sh@872 -- # size=4096 00:10:45.842 00:24:39 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:45.842 00:24:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:45.842 00:24:39 -- common/autotest_common.sh@875 -- # return 0 00:10:45.842 00:24:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:45.842 00:24:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:45.842 00:24:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:45.842 00:24:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:45.842 00:24:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:46.100 00:24:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:46.100 { 00:10:46.100 "nbd_device": "/dev/nbd0", 00:10:46.100 "bdev_name": "Malloc0" 00:10:46.100 }, 00:10:46.100 { 00:10:46.100 "nbd_device": "/dev/nbd1", 00:10:46.100 "bdev_name": "Malloc1" 00:10:46.100 } 00:10:46.100 ]' 00:10:46.100 00:24:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:46.100 { 00:10:46.100 "nbd_device": "/dev/nbd0", 00:10:46.100 "bdev_name": "Malloc0" 00:10:46.100 }, 00:10:46.100 { 00:10:46.100 "nbd_device": "/dev/nbd1", 00:10:46.100 "bdev_name": "Malloc1" 00:10:46.100 } 00:10:46.100 ]' 00:10:46.100 00:24:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:46.358 /dev/nbd1' 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:46.358 /dev/nbd1' 00:10:46.358 00:24:39 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@65 -- # count=2 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@95 -- # count=2 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:46.358 256+0 records in 00:10:46.358 256+0 records out 00:10:46.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00732303 s, 143 MB/s 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:46.358 256+0 records in 00:10:46.358 256+0 records out 00:10:46.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262141 s, 40.0 MB/s 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:46.358 00:24:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:46.358 256+0 records in 00:10:46.358 256+0 records out 00:10:46.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0376025 s, 27.9 MB/s 00:10:46.358 00:24:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@51 -- # local i 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:46.359 00:24:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:46.652 00:24:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:46.652 00:24:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:46.652 00:24:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:46.652 00:24:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.652 00:24:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.652 00:24:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:46.652 00:24:40 -- bdev/nbd_common.sh@41 -- # break 00:10:46.652 00:24:40 -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.652 00:24:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:46.652 00:24:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:46.910 00:24:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:46.910 00:24:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:46.910 00:24:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:46.910 00:24:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:46.910 00:24:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:46.910 00:24:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:46.910 00:24:40 -- bdev/nbd_common.sh@41 -- # break 00:10:46.910 00:24:40 -- bdev/nbd_common.sh@45 -- # return 0 00:10:46.910 00:24:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:46.910 00:24:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.910 00:24:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:47.168 00:24:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:47.168 00:24:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:47.168 00:24:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:47.168 00:24:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:47.168 00:24:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:47.168 00:24:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:47.168 00:24:40 -- bdev/nbd_common.sh@65 -- # true 00:10:47.168 00:24:40 -- bdev/nbd_common.sh@65 -- # count=0 00:10:47.168 00:24:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:47.168 00:24:40 -- bdev/nbd_common.sh@104 -- # count=0 00:10:47.168 00:24:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:47.168 00:24:40 -- bdev/nbd_common.sh@109 -- # return 0 00:10:47.168 00:24:40 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:47.734 00:24:41 -- event/event.sh@35 -- # sleep 3 00:10:49.639 [2024-04-24 00:24:43.016835] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:49.639 [2024-04-24 00:24:43.266443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.639 [2024-04-24 00:24:43.266452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.907 [2024-04-24 00:24:43.535280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:49.907 [2024-04-24 00:24:43.535468] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
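The Round 0 trace above is one full pass of SPDK's NBD data-verify path: expose the two malloc bdevs as kernel NBD devices over the /var/tmp/spdk-nbd.sock RPC socket, push random data through them, compare byte-for-byte, then tear everything down. A condensed sketch of that pass, built from the commands and paths visible in the trace (the nbd_common.sh bookkeeping around them is omitted):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    TMP=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # expose each malloc bdev as a kernel NBD device
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1

    # write 1 MiB of random data through each device, then verify byte-for-byte
    dd if=/dev/urandom of=$TMP bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$TMP of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M $TMP $nbd
    done
    rm $TMP

    # detach the devices and stop this app iteration
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC spdk_kill_instance SIGTERM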
00:10:50.842 00:24:44 -- event/event.sh@23 -- # for i in {0..2} 00:10:50.842 spdk_app_start Round 1 00:10:50.842 00:24:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:50.842 00:24:44 -- event/event.sh@25 -- # waitforlisten 112547 /var/tmp/spdk-nbd.sock 00:10:50.842 00:24:44 -- common/autotest_common.sh@817 -- # '[' -z 112547 ']' 00:10:50.842 00:24:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:50.842 00:24:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:50.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:50.842 00:24:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:50.842 00:24:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:50.842 00:24:44 -- common/autotest_common.sh@10 -- # set +x 00:10:51.100 00:24:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:51.100 00:24:44 -- common/autotest_common.sh@850 -- # return 0 00:10:51.100 00:24:44 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:51.359 Malloc0 00:10:51.359 00:24:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:51.618 Malloc1 00:10:51.618 00:24:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@12 -- # local i 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.618 00:24:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:51.876 /dev/nbd0 00:10:51.876 00:24:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:51.876 00:24:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:51.876 00:24:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:10:51.876 00:24:45 -- common/autotest_common.sh@855 -- # local i 00:10:51.876 00:24:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:51.876 00:24:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:51.876 00:24:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:10:51.876 00:24:45 -- common/autotest_common.sh@859 -- # break 00:10:51.876 00:24:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:51.876 00:24:45 -- common/autotest_common.sh@870 -- # (( 
i <= 20 )) 00:10:51.876 00:24:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:51.876 1+0 records in 00:10:51.876 1+0 records out 00:10:51.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025149 s, 16.3 MB/s 00:10:51.876 00:24:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:51.876 00:24:45 -- common/autotest_common.sh@872 -- # size=4096 00:10:51.876 00:24:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:51.876 00:24:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:51.876 00:24:45 -- common/autotest_common.sh@875 -- # return 0 00:10:51.876 00:24:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:51.876 00:24:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.876 00:24:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:52.134 /dev/nbd1 00:10:52.134 00:24:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:52.134 00:24:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:52.134 00:24:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:10:52.134 00:24:45 -- common/autotest_common.sh@855 -- # local i 00:10:52.134 00:24:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:52.134 00:24:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:52.134 00:24:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:10:52.134 00:24:45 -- common/autotest_common.sh@859 -- # break 00:10:52.134 00:24:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:52.134 00:24:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:52.134 00:24:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:52.134 1+0 records in 00:10:52.134 1+0 records out 00:10:52.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681228 s, 6.0 MB/s 00:10:52.134 00:24:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:52.134 00:24:45 -- common/autotest_common.sh@872 -- # size=4096 00:10:52.134 00:24:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:52.134 00:24:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:52.134 00:24:45 -- common/autotest_common.sh@875 -- # return 0 00:10:52.134 00:24:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:52.134 00:24:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:52.134 00:24:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:52.134 00:24:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.134 00:24:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:52.392 00:24:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:52.392 { 00:10:52.392 "nbd_device": "/dev/nbd0", 00:10:52.392 "bdev_name": "Malloc0" 00:10:52.392 }, 00:10:52.392 { 00:10:52.392 "nbd_device": "/dev/nbd1", 00:10:52.392 "bdev_name": "Malloc1" 00:10:52.392 } 00:10:52.392 ]' 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:52.650 { 00:10:52.650 "nbd_device": "/dev/nbd0", 00:10:52.650 "bdev_name": "Malloc0" 00:10:52.650 }, 00:10:52.650 { 00:10:52.650 "nbd_device": "/dev/nbd1", 00:10:52.650 "bdev_name": "Malloc1" 00:10:52.650 } 
00:10:52.650 ]' 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:52.650 /dev/nbd1' 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:52.650 /dev/nbd1' 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@65 -- # count=2 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@95 -- # count=2 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:52.650 256+0 records in 00:10:52.650 256+0 records out 00:10:52.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119091 s, 88.0 MB/s 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:52.650 256+0 records in 00:10:52.650 256+0 records out 00:10:52.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306507 s, 34.2 MB/s 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:52.650 256+0 records in 00:10:52.650 256+0 records out 00:10:52.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275576 s, 38.1 MB/s 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:10:52.650 00:24:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@51 -- # local i 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:52.650 00:24:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:52.908 00:24:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:52.908 00:24:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:52.908 00:24:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:52.908 00:24:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.908 00:24:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.908 00:24:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:52.908 00:24:46 -- bdev/nbd_common.sh@41 -- # break 00:10:52.908 00:24:46 -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.908 00:24:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:52.908 00:24:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:53.474 00:24:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:53.474 00:24:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:53.474 00:24:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:53.474 00:24:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:53.474 00:24:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:53.474 00:24:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:53.474 00:24:46 -- bdev/nbd_common.sh@41 -- # break 00:10:53.474 00:24:46 -- bdev/nbd_common.sh@45 -- # return 0 00:10:53.474 00:24:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:53.474 00:24:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:53.474 00:24:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:53.731 00:24:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:53.731 00:24:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:53.731 00:24:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:53.731 00:24:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:53.731 00:24:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:53.731 00:24:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:53.731 00:24:47 -- bdev/nbd_common.sh@65 -- # true 00:10:53.731 00:24:47 -- bdev/nbd_common.sh@65 -- # count=0 00:10:53.731 00:24:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:53.731 00:24:47 -- bdev/nbd_common.sh@104 -- # count=0 00:10:53.731 00:24:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:53.731 00:24:47 -- bdev/nbd_common.sh@109 -- # return 0 00:10:53.731 00:24:47 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:54.297 00:24:47 -- event/event.sh@35 -- # sleep 3 00:10:56.197 [2024-04-24 00:24:49.532231] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:56.197 [2024-04-24 00:24:49.782373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.197 [2024-04-24 00:24:49.782383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.455 [2024-04-24 00:24:50.046371] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
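Each nbd_start_disk call in these rounds is followed by a waitfornbd check before any data is written. Reconstructed from the common/autotest_common.sh@854-875 trace lines, the helper amounts to polling /proc/partitions for the new device and then proving it is readable with a single 4 KiB direct read. This is a sketch; the retry delay is an assumption, since the trace only shows the loop bounds because the first probe succeeds:

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off, not visible in the trace
        done
        # one direct 4096-byte read as a usability check
        local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct
        local size=$(stat -c %s $tmp)
        rm -f $tmp
        [ "$size" != 0 ]
    }

The waitfornbd_exit helper seen after each nbd_stop_disk (bdev/nbd_common.sh@35-45 in the trace) is the same /proc/partitions loop with the condition inverted: it breaks once the device name is gone.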
00:10:56.455 [2024-04-24 00:24:50.046515] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:57.387 00:24:50 -- event/event.sh@23 -- # for i in {0..2} 00:10:57.387 spdk_app_start Round 2 00:10:57.387 00:24:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:57.387 00:24:50 -- event/event.sh@25 -- # waitforlisten 112547 /var/tmp/spdk-nbd.sock 00:10:57.387 00:24:50 -- common/autotest_common.sh@817 -- # '[' -z 112547 ']' 00:10:57.387 00:24:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:57.387 00:24:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:57.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:57.387 00:24:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:57.387 00:24:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:57.387 00:24:50 -- common/autotest_common.sh@10 -- # set +x 00:10:57.682 00:24:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:57.682 00:24:51 -- common/autotest_common.sh@850 -- # return 0 00:10:57.682 00:24:51 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:57.940 Malloc0 00:10:58.198 00:24:51 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:58.456 Malloc1 00:10:58.456 00:24:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@12 -- # local i 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:58.456 00:24:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:59.020 /dev/nbd0 00:10:59.020 00:24:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:59.020 00:24:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:59.020 00:24:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:10:59.020 00:24:52 -- common/autotest_common.sh@855 -- # local i 00:10:59.020 00:24:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:59.020 00:24:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:59.020 00:24:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:10:59.020 00:24:52 -- 
common/autotest_common.sh@859 -- # break 00:10:59.020 00:24:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:59.020 00:24:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:59.020 00:24:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:59.020 1+0 records in 00:10:59.020 1+0 records out 00:10:59.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506636 s, 8.1 MB/s 00:10:59.020 00:24:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:59.020 00:24:52 -- common/autotest_common.sh@872 -- # size=4096 00:10:59.020 00:24:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:59.020 00:24:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:59.020 00:24:52 -- common/autotest_common.sh@875 -- # return 0 00:10:59.020 00:24:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:59.020 00:24:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:59.020 00:24:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:59.277 /dev/nbd1 00:10:59.535 00:24:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:59.535 00:24:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:59.535 00:24:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:10:59.535 00:24:53 -- common/autotest_common.sh@855 -- # local i 00:10:59.535 00:24:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:59.535 00:24:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:59.535 00:24:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:10:59.535 00:24:53 -- common/autotest_common.sh@859 -- # break 00:10:59.536 00:24:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:59.536 00:24:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:59.536 00:24:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:59.536 1+0 records in 00:10:59.536 1+0 records out 00:10:59.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521264 s, 7.9 MB/s 00:10:59.536 00:24:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:59.536 00:24:53 -- common/autotest_common.sh@872 -- # size=4096 00:10:59.536 00:24:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:59.536 00:24:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:59.536 00:24:53 -- common/autotest_common.sh@875 -- # return 0 00:10:59.536 00:24:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:59.536 00:24:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:59.536 00:24:53 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:59.536 00:24:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.536 00:24:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:59.794 00:24:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:59.794 { 00:10:59.794 "nbd_device": "/dev/nbd0", 00:10:59.794 "bdev_name": "Malloc0" 00:10:59.794 }, 00:10:59.794 { 00:10:59.794 "nbd_device": "/dev/nbd1", 00:10:59.794 "bdev_name": "Malloc1" 00:10:59.794 } 00:10:59.794 ]' 00:10:59.794 00:24:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:59.794 { 00:10:59.794 "nbd_device": 
"/dev/nbd0", 00:10:59.794 "bdev_name": "Malloc0" 00:10:59.794 }, 00:10:59.794 { 00:10:59.794 "nbd_device": "/dev/nbd1", 00:10:59.794 "bdev_name": "Malloc1" 00:10:59.794 } 00:10:59.794 ]' 00:10:59.794 00:24:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:00.052 /dev/nbd1' 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:00.052 /dev/nbd1' 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@65 -- # count=2 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@66 -- # echo 2 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@95 -- # count=2 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:00.052 256+0 records in 00:11:00.052 256+0 records out 00:11:00.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00870914 s, 120 MB/s 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:00.052 256+0 records in 00:11:00.052 256+0 records out 00:11:00.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224713 s, 46.7 MB/s 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:00.052 256+0 records in 00:11:00.052 256+0 records out 00:11:00.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0338063 s, 31.0 MB/s 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.052 00:24:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:00.053 00:24:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:00.053 00:24:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:00.053 00:24:53 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:00.053 00:24:53 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 
00:11:00.053 00:24:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.053 00:24:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:00.053 00:24:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:00.053 00:24:53 -- bdev/nbd_common.sh@51 -- # local i 00:11:00.053 00:24:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.053 00:24:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:00.311 00:24:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:00.311 00:24:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:00.311 00:24:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:00.311 00:24:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.311 00:24:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.311 00:24:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:00.311 00:24:53 -- bdev/nbd_common.sh@41 -- # break 00:11:00.311 00:24:53 -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.311 00:24:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.311 00:24:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:00.569 00:24:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:00.569 00:24:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:00.569 00:24:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:00.569 00:24:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.569 00:24:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.569 00:24:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:00.569 00:24:54 -- bdev/nbd_common.sh@41 -- # break 00:11:00.569 00:24:54 -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.569 00:24:54 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:00.569 00:24:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:00.569 00:24:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:00.827 00:24:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:00.827 00:24:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:00.827 00:24:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:00.827 00:24:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:00.827 00:24:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:00.827 00:24:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:00.827 00:24:54 -- bdev/nbd_common.sh@65 -- # true 00:11:00.827 00:24:54 -- bdev/nbd_common.sh@65 -- # count=0 00:11:00.827 00:24:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:00.828 00:24:54 -- bdev/nbd_common.sh@104 -- # count=0 00:11:00.828 00:24:54 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:00.828 00:24:54 -- bdev/nbd_common.sh@109 -- # return 0 00:11:00.828 00:24:54 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:01.392 00:24:55 -- event/event.sh@35 -- # sleep 3 00:11:03.291 [2024-04-24 00:24:56.830634] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:03.549 [2024-04-24 00:24:57.106719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.549 [2024-04-24 00:24:57.106723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.807 [2024-04-24 00:24:57.379107] notify.c: 45:spdk_notify_type_register: 
*NOTICE*: Notification type 'bdev_register' already registered. 00:11:03.807 [2024-04-24 00:24:57.379250] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:04.372 00:24:58 -- event/event.sh@38 -- # waitforlisten 112547 /var/tmp/spdk-nbd.sock 00:11:04.372 00:24:58 -- common/autotest_common.sh@817 -- # '[' -z 112547 ']' 00:11:04.372 00:24:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:04.372 00:24:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:04.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:04.372 00:24:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:04.372 00:24:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:04.372 00:24:58 -- common/autotest_common.sh@10 -- # set +x 00:11:04.937 00:24:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:04.937 00:24:58 -- common/autotest_common.sh@850 -- # return 0 00:11:04.937 00:24:58 -- event/event.sh@39 -- # killprocess 112547 00:11:04.937 00:24:58 -- common/autotest_common.sh@936 -- # '[' -z 112547 ']' 00:11:04.938 00:24:58 -- common/autotest_common.sh@940 -- # kill -0 112547 00:11:04.938 00:24:58 -- common/autotest_common.sh@941 -- # uname 00:11:04.938 00:24:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:04.938 00:24:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112547 00:11:04.938 00:24:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:04.938 00:24:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:04.938 killing process with pid 112547 00:11:04.938 00:24:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112547' 00:11:04.938 00:24:58 -- common/autotest_common.sh@955 -- # kill 112547 00:11:04.938 00:24:58 -- common/autotest_common.sh@960 -- # wait 112547 00:11:06.309 spdk_app_start is called in Round 0. 00:11:06.309 Shutdown signal received, stop current app iteration 00:11:06.309 Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 reinitialization... 00:11:06.309 spdk_app_start is called in Round 1. 00:11:06.309 Shutdown signal received, stop current app iteration 00:11:06.309 Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 reinitialization... 00:11:06.309 spdk_app_start is called in Round 2. 00:11:06.309 Shutdown signal received, stop current app iteration 00:11:06.309 Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 reinitialization... 00:11:06.309 spdk_app_start is called in Round 3. 
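Read together, the event/event.sh trace lines above give the shape of the app_repeat driver: three scripted restart rounds against the same pid, each re-creating two malloc bdevs (64 MB, 4 KiB blocks) and re-running the NBD verify, with the app killed between rounds so it can restart itself; Round 3 then shuts the process down for good. Condensed from the trace (helper bodies as sketched earlier):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten 112547 /var/tmp/spdk-nbd.sock
        $RPC bdev_malloc_create 64 4096        # Malloc0
        $RPC bdev_malloc_create 64 4096        # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $RPC spdk_kill_instance SIGTERM        # the app_repeat binary restarts itself for the next round
        sleep 3
    done
    waitforlisten 112547 /var/tmp/spdk-nbd.sock    # Round 3
    killprocess 112547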
00:11:06.309 Shutdown signal received, stop current app iteration 00:11:06.309 ************************************ 00:11:06.309 END TEST app_repeat 00:11:06.309 ************************************ 00:11:06.309 00:25:00 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:06.309 00:25:00 -- event/event.sh@42 -- # return 0 00:11:06.309 00:11:06.309 real 0m23.080s 00:11:06.309 user 0m49.393s 00:11:06.309 sys 0m3.548s 00:11:06.310 00:25:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:06.310 00:25:00 -- common/autotest_common.sh@10 -- # set +x 00:11:06.310 00:25:00 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:06.310 00:25:00 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:06.310 00:25:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:06.310 00:25:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:06.310 00:25:00 -- common/autotest_common.sh@10 -- # set +x 00:11:06.566 ************************************ 00:11:06.566 START TEST cpu_locks 00:11:06.566 ************************************ 00:11:06.566 00:25:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:06.566 * Looking for test storage... 00:11:06.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:06.566 00:25:00 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:06.566 00:25:00 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:06.566 00:25:00 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:06.566 00:25:00 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:06.566 00:25:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:06.566 00:25:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:06.566 00:25:00 -- common/autotest_common.sh@10 -- # set +x 00:11:06.566 ************************************ 00:11:06.566 START TEST default_locks 00:11:06.566 ************************************ 00:11:06.566 00:25:00 -- common/autotest_common.sh@1111 -- # default_locks 00:11:06.567 00:25:00 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=113118 00:11:06.567 00:25:00 -- event/cpu_locks.sh@47 -- # waitforlisten 113118 00:11:06.567 00:25:00 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:06.567 00:25:00 -- common/autotest_common.sh@817 -- # '[' -z 113118 ']' 00:11:06.567 00:25:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.567 00:25:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:06.567 00:25:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.567 00:25:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:06.567 00:25:00 -- common/autotest_common.sh@10 -- # set +x 00:11:06.824 [2024-04-24 00:25:00.356022] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
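With app_repeat done, the log switches to cpu_locks.sh, which drives a series of sub-tests against /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt using two RPC sockets. The scaffolding visible in this section of the trace amounts to the sketch below; run_test is the harness wrapper that prints the START/END banners and the real/user/sys timing seen around each sub-test:

    # cpu_locks.sh skeleton as reflected in the trace
    rpc_sock1=/var/tmp/spdk.sock
    rpc_sock2=/var/tmp/spdk2.sock
    trap cleanup EXIT SIGTERM SIGINT

    run_test default_locks default_locks
    run_test default_locks_via_rpc default_locks_via_rpc
    run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
    run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
    run_test locking_app_on_locked_coremask locking_app_on_locked_coremask

The later run_test invocations show up further down in this log as each sub-test starts.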
00:11:06.824 [2024-04-24 00:25:00.356289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113118 ] 00:11:06.824 [2024-04-24 00:25:00.542105] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.082 [2024-04-24 00:25:00.837401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.456 00:25:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:08.456 00:25:01 -- common/autotest_common.sh@850 -- # return 0 00:11:08.456 00:25:01 -- event/cpu_locks.sh@49 -- # locks_exist 113118 00:11:08.456 00:25:01 -- event/cpu_locks.sh@22 -- # lslocks -p 113118 00:11:08.456 00:25:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:08.456 00:25:02 -- event/cpu_locks.sh@50 -- # killprocess 113118 00:11:08.456 00:25:02 -- common/autotest_common.sh@936 -- # '[' -z 113118 ']' 00:11:08.456 00:25:02 -- common/autotest_common.sh@940 -- # kill -0 113118 00:11:08.456 00:25:02 -- common/autotest_common.sh@941 -- # uname 00:11:08.456 00:25:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:08.456 00:25:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113118 00:11:08.456 00:25:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:08.456 killing process with pid 113118 00:11:08.456 00:25:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:08.456 00:25:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113118' 00:11:08.456 00:25:02 -- common/autotest_common.sh@955 -- # kill 113118 00:11:08.456 00:25:02 -- common/autotest_common.sh@960 -- # wait 113118 00:11:11.767 00:25:04 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 113118 00:11:11.767 00:25:04 -- common/autotest_common.sh@638 -- # local es=0 00:11:11.767 00:25:04 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 113118 00:11:11.767 00:25:04 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:11:11.767 00:25:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:11.767 00:25:04 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:11:11.767 00:25:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:11.767 00:25:04 -- common/autotest_common.sh@641 -- # waitforlisten 113118 00:11:11.767 00:25:04 -- common/autotest_common.sh@817 -- # '[' -z 113118 ']' 00:11:11.767 00:25:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.767 00:25:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:11.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.767 00:25:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:11.767 00:25:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:11.767 00:25:04 -- common/autotest_common.sh@10 -- # set +x 00:11:11.767 ERROR: process (pid: 113118) is no longer running 00:11:11.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (113118) - No such process 00:11:11.767 00:25:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:11.767 00:25:04 -- common/autotest_common.sh@850 -- # return 1 00:11:11.767 00:25:04 -- common/autotest_common.sh@641 -- # es=1 00:11:11.767 00:25:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:11.767 00:25:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:11.767 00:25:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:11.767 00:25:04 -- event/cpu_locks.sh@54 -- # no_locks 00:11:11.767 00:25:04 -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:11.767 00:25:04 -- event/cpu_locks.sh@26 -- # local lock_files 00:11:11.767 00:25:04 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:11.767 00:11:11.767 real 0m4.719s 00:11:11.767 user 0m4.770s 00:11:11.767 sys 0m0.755s 00:11:11.767 00:25:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:11.767 ************************************ 00:11:11.767 END TEST default_locks 00:11:11.767 ************************************ 00:11:11.767 00:25:04 -- common/autotest_common.sh@10 -- # set +x 00:11:11.767 00:25:05 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:11.767 00:25:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:11.768 00:25:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:11.768 00:25:05 -- common/autotest_common.sh@10 -- # set +x 00:11:11.768 ************************************ 00:11:11.768 START TEST default_locks_via_rpc 00:11:11.768 ************************************ 00:11:11.768 00:25:05 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:11:11.768 00:25:05 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:11.768 00:25:05 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=113206 00:11:11.768 00:25:05 -- event/cpu_locks.sh@63 -- # waitforlisten 113206 00:11:11.768 00:25:05 -- common/autotest_common.sh@817 -- # '[' -z 113206 ']' 00:11:11.768 00:25:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.768 00:25:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:11.768 00:25:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.768 00:25:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:11.768 00:25:05 -- common/autotest_common.sh@10 -- # set +x 00:11:11.768 [2024-04-24 00:25:05.183278] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
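The default_locks run above, and every cpu_locks sub-test after it, leans on the same two helpers. Condensed from the cpu_locks.sh@22 and common/autotest_common.sh@936-960 trace lines (the sudo special case in the real killprocess is dropped here):

    # cpu_locks.sh@22: the target holds its core lock if lslocks shows an spdk_cpu_lock entry for it
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # autotest_common.sh@936-960, condensed
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # still alive?
        ps --no-headers -o comm= "$pid"                   # expected to report reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

default_locks itself then just checks locks_exist on the freshly started target, kills it, and confirms that a second waitforlisten on the dead pid fails with "ERROR: process ... is no longer running", exactly as traced above.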
00:11:11.768 [2024-04-24 00:25:05.183580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113206 ] 00:11:11.768 [2024-04-24 00:25:05.367393] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.025 [2024-04-24 00:25:05.596527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.968 00:25:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:12.968 00:25:06 -- common/autotest_common.sh@850 -- # return 0 00:11:12.968 00:25:06 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:11:12.968 00:25:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.968 00:25:06 -- common/autotest_common.sh@10 -- # set +x 00:11:12.968 00:25:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.968 00:25:06 -- event/cpu_locks.sh@67 -- # no_locks 00:11:12.968 00:25:06 -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:12.968 00:25:06 -- event/cpu_locks.sh@26 -- # local lock_files 00:11:12.968 00:25:06 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:12.968 00:25:06 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:11:12.968 00:25:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.968 00:25:06 -- common/autotest_common.sh@10 -- # set +x 00:11:12.968 00:25:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.968 00:25:06 -- event/cpu_locks.sh@71 -- # locks_exist 113206 00:11:12.968 00:25:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:12.968 00:25:06 -- event/cpu_locks.sh@22 -- # lslocks -p 113206 00:11:13.227 00:25:06 -- event/cpu_locks.sh@73 -- # killprocess 113206 00:11:13.227 00:25:06 -- common/autotest_common.sh@936 -- # '[' -z 113206 ']' 00:11:13.227 00:25:06 -- common/autotest_common.sh@940 -- # kill -0 113206 00:11:13.227 00:25:06 -- common/autotest_common.sh@941 -- # uname 00:11:13.227 00:25:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:13.227 00:25:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113206 00:11:13.227 00:25:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:13.227 killing process with pid 113206 00:11:13.227 00:25:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:13.227 00:25:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113206' 00:11:13.227 00:25:06 -- common/autotest_common.sh@955 -- # kill 113206 00:11:13.227 00:25:06 -- common/autotest_common.sh@960 -- # wait 113206 00:11:16.513 00:11:16.513 real 0m4.480s 00:11:16.513 user 0m4.520s 00:11:16.513 sys 0m0.747s 00:11:16.513 00:25:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:16.513 ************************************ 00:11:16.513 00:25:09 -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 END TEST default_locks_via_rpc 00:11:16.513 ************************************ 00:11:16.513 00:25:09 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:16.513 00:25:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:16.513 00:25:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.513 00:25:09 -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 ************************************ 00:11:16.513 START TEST non_locking_app_on_locked_coremask 00:11:16.513 ************************************ 00:11:16.513 
00:25:09 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:11:16.513 00:25:09 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=113309 00:11:16.513 00:25:09 -- event/cpu_locks.sh@81 -- # waitforlisten 113309 /var/tmp/spdk.sock 00:11:16.513 00:25:09 -- common/autotest_common.sh@817 -- # '[' -z 113309 ']' 00:11:16.513 00:25:09 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:16.513 00:25:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.513 00:25:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:16.513 00:25:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.513 00:25:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:16.513 00:25:09 -- common/autotest_common.sh@10 -- # set +x 00:11:16.513 [2024-04-24 00:25:09.748210] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:11:16.513 [2024-04-24 00:25:09.748410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113309 ] 00:11:16.513 [2024-04-24 00:25:09.926069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.513 [2024-04-24 00:25:10.157909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.448 00:25:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:17.448 00:25:11 -- common/autotest_common.sh@850 -- # return 0 00:11:17.448 00:25:11 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=113329 00:11:17.448 00:25:11 -- event/cpu_locks.sh@85 -- # waitforlisten 113329 /var/tmp/spdk2.sock 00:11:17.448 00:25:11 -- common/autotest_common.sh@817 -- # '[' -z 113329 ']' 00:11:17.448 00:25:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:17.448 00:25:11 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:17.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:17.448 00:25:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:17.448 00:25:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:17.448 00:25:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:17.448 00:25:11 -- common/autotest_common.sh@10 -- # set +x 00:11:17.448 [2024-04-24 00:25:11.190649] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:11:17.448 [2024-04-24 00:25:11.191371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113329 ] 00:11:17.706 [2024-04-24 00:25:11.353161] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
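The two launches just traced are the heart of non_locking_app_on_locked_coremask: a first spdk_tgt pinned to core 0 takes that core's lock, and a second instance on the same cpumask only starts because it is told not to take locks (the "CPU core locks deactivated" notice above). A sketch using the flags and sockets from the trace; the backgrounding and pid capture are implied rather than shown by the xtrace:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $SPDK_TGT -m 0x1 &                                    # RPC on /var/tmp/spdk.sock, holds the core-0 lock
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid /var/tmp/spdk.sock

    $SPDK_TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!                                      # same mask, no lock files taken
    waitforlisten $spdk_tgt_pid2 /var/tmp/spdk2.sock

    locks_exist $spdk_tgt_pid                             # cpu_locks.sh@87: the first instance still owns the lock
    killprocess $spdk_tgt_pid
    killprocess $spdk_tgt_pid2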
00:11:17.706 [2024-04-24 00:25:11.353254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.271 [2024-04-24 00:25:11.809558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.797 00:25:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:20.797 00:25:14 -- common/autotest_common.sh@850 -- # return 0 00:11:20.797 00:25:14 -- event/cpu_locks.sh@87 -- # locks_exist 113309 00:11:20.797 00:25:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:20.797 00:25:14 -- event/cpu_locks.sh@22 -- # lslocks -p 113309 00:11:20.797 00:25:14 -- event/cpu_locks.sh@89 -- # killprocess 113309 00:11:20.797 00:25:14 -- common/autotest_common.sh@936 -- # '[' -z 113309 ']' 00:11:20.797 00:25:14 -- common/autotest_common.sh@940 -- # kill -0 113309 00:11:20.797 00:25:14 -- common/autotest_common.sh@941 -- # uname 00:11:20.797 00:25:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:20.797 00:25:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113309 00:11:21.056 00:25:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:21.056 00:25:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:21.056 killing process with pid 113309 00:11:21.056 00:25:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113309' 00:11:21.056 00:25:14 -- common/autotest_common.sh@955 -- # kill 113309 00:11:21.056 00:25:14 -- common/autotest_common.sh@960 -- # wait 113309 00:11:26.316 00:25:20 -- event/cpu_locks.sh@90 -- # killprocess 113329 00:11:26.316 00:25:20 -- common/autotest_common.sh@936 -- # '[' -z 113329 ']' 00:11:26.316 00:25:20 -- common/autotest_common.sh@940 -- # kill -0 113329 00:11:26.316 00:25:20 -- common/autotest_common.sh@941 -- # uname 00:11:26.316 00:25:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:26.316 00:25:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113329 00:11:26.316 00:25:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:26.316 killing process with pid 113329 00:11:26.316 00:25:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:26.316 00:25:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113329' 00:11:26.316 00:25:20 -- common/autotest_common.sh@955 -- # kill 113329 00:11:26.316 00:25:20 -- common/autotest_common.sh@960 -- # wait 113329 00:11:29.597 00:11:29.597 real 0m13.054s 00:11:29.597 user 0m13.644s 00:11:29.597 sys 0m1.396s 00:11:29.597 ************************************ 00:11:29.597 END TEST non_locking_app_on_locked_coremask 00:11:29.597 ************************************ 00:11:29.597 00:25:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:29.597 00:25:22 -- common/autotest_common.sh@10 -- # set +x 00:11:29.597 00:25:22 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:29.597 00:25:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:29.597 00:25:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.597 00:25:22 -- common/autotest_common.sh@10 -- # set +x 00:11:29.597 ************************************ 00:11:29.597 START TEST locking_app_on_unlocked_coremask 00:11:29.597 ************************************ 00:11:29.597 00:25:22 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:11:29.597 00:25:22 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=113510 00:11:29.597 00:25:22 -- event/cpu_locks.sh@99 -- # waitforlisten 113510 
/var/tmp/spdk.sock 00:11:29.597 00:25:22 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:29.597 00:25:22 -- common/autotest_common.sh@817 -- # '[' -z 113510 ']' 00:11:29.597 00:25:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.598 00:25:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:29.598 00:25:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.598 00:25:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:29.598 00:25:22 -- common/autotest_common.sh@10 -- # set +x 00:11:29.598 [2024-04-24 00:25:22.894266] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:11:29.598 [2024-04-24 00:25:22.894481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113510 ] 00:11:29.598 [2024-04-24 00:25:23.092021] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:29.598 [2024-04-24 00:25:23.092138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.598 [2024-04-24 00:25:23.379105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.975 00:25:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:30.975 00:25:24 -- common/autotest_common.sh@850 -- # return 0 00:11:30.975 00:25:24 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=113536 00:11:30.975 00:25:24 -- event/cpu_locks.sh@103 -- # waitforlisten 113536 /var/tmp/spdk2.sock 00:11:30.975 00:25:24 -- common/autotest_common.sh@817 -- # '[' -z 113536 ']' 00:11:30.975 00:25:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:30.975 00:25:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:30.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:30.975 00:25:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:30.975 00:25:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:30.975 00:25:24 -- common/autotest_common.sh@10 -- # set +x 00:11:30.975 00:25:24 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:30.975 [2024-04-24 00:25:24.473202] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:11:30.976 [2024-04-24 00:25:24.473367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113536 ] 00:11:30.976 [2024-04-24 00:25:24.636616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.542 [2024-04-24 00:25:25.133531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.126 00:25:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:34.126 00:25:27 -- common/autotest_common.sh@850 -- # return 0 00:11:34.126 00:25:27 -- event/cpu_locks.sh@105 -- # locks_exist 113536 00:11:34.126 00:25:27 -- event/cpu_locks.sh@22 -- # lslocks -p 113536 00:11:34.126 00:25:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:34.126 00:25:27 -- event/cpu_locks.sh@107 -- # killprocess 113510 00:11:34.126 00:25:27 -- common/autotest_common.sh@936 -- # '[' -z 113510 ']' 00:11:34.126 00:25:27 -- common/autotest_common.sh@940 -- # kill -0 113510 00:11:34.126 00:25:27 -- common/autotest_common.sh@941 -- # uname 00:11:34.126 00:25:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:34.126 00:25:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113510 00:11:34.383 00:25:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:34.383 00:25:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:34.383 killing process with pid 113510 00:11:34.383 00:25:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113510' 00:11:34.383 00:25:27 -- common/autotest_common.sh@955 -- # kill 113510 00:11:34.383 00:25:27 -- common/autotest_common.sh@960 -- # wait 113510 00:11:39.646 00:25:33 -- event/cpu_locks.sh@108 -- # killprocess 113536 00:11:39.646 00:25:33 -- common/autotest_common.sh@936 -- # '[' -z 113536 ']' 00:11:39.646 00:25:33 -- common/autotest_common.sh@940 -- # kill -0 113536 00:11:39.646 00:25:33 -- common/autotest_common.sh@941 -- # uname 00:11:39.646 00:25:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:39.646 00:25:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113536 00:11:39.646 00:25:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:39.646 killing process with pid 113536 00:11:39.646 00:25:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:39.646 00:25:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113536' 00:11:39.646 00:25:33 -- common/autotest_common.sh@955 -- # kill 113536 00:11:39.646 00:25:33 -- common/autotest_common.sh@960 -- # wait 113536 00:11:42.954 00:11:42.954 real 0m13.425s 00:11:42.954 user 0m14.103s 00:11:42.954 sys 0m1.387s 00:11:42.954 ************************************ 00:11:42.954 END TEST locking_app_on_unlocked_coremask 00:11:42.954 ************************************ 00:11:42.954 00:25:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:42.954 00:25:36 -- common/autotest_common.sh@10 -- # set +x 00:11:42.954 00:25:36 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:42.954 00:25:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:42.954 00:25:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:42.954 00:25:36 -- common/autotest_common.sh@10 -- # set +x 00:11:42.954 ************************************ 00:11:42.954 START TEST locking_app_on_locked_coremask 00:11:42.954 
************************************ 00:11:42.954 00:25:36 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:11:42.954 00:25:36 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=113716 00:11:42.954 00:25:36 -- event/cpu_locks.sh@116 -- # waitforlisten 113716 /var/tmp/spdk.sock 00:11:42.954 00:25:36 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:42.954 00:25:36 -- common/autotest_common.sh@817 -- # '[' -z 113716 ']' 00:11:42.954 00:25:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.954 00:25:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:42.954 00:25:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.954 00:25:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:42.954 00:25:36 -- common/autotest_common.sh@10 -- # set +x 00:11:42.954 [2024-04-24 00:25:36.463708] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:11:42.954 [2024-04-24 00:25:36.463942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113716 ] 00:11:42.954 [2024-04-24 00:25:36.660577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.521 [2024-04-24 00:25:37.022650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.521 00:25:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:44.521 00:25:38 -- common/autotest_common.sh@850 -- # return 0 00:11:44.521 00:25:38 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=113751 00:11:44.521 00:25:38 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 113751 /var/tmp/spdk2.sock 00:11:44.521 00:25:38 -- common/autotest_common.sh@638 -- # local es=0 00:11:44.521 00:25:38 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 113751 /var/tmp/spdk2.sock 00:11:44.521 00:25:38 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:11:44.521 00:25:38 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:44.521 00:25:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:44.521 00:25:38 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:11:44.521 00:25:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:44.521 00:25:38 -- common/autotest_common.sh@641 -- # waitforlisten 113751 /var/tmp/spdk2.sock 00:11:44.521 00:25:38 -- common/autotest_common.sh@817 -- # '[' -z 113751 ']' 00:11:44.521 00:25:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:44.521 00:25:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:44.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:44.522 00:25:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:44.522 00:25:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:44.522 00:25:38 -- common/autotest_common.sh@10 -- # set +x 00:11:44.522 [2024-04-24 00:25:38.253027] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:11:44.522 [2024-04-24 00:25:38.253322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113751 ] 00:11:44.780 [2024-04-24 00:25:38.437395] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 113716 has claimed it. 00:11:44.780 [2024-04-24 00:25:38.437566] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:45.347 ERROR: process (pid: 113751) is no longer running 00:11:45.347 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (113751) - No such process 00:11:45.347 00:25:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:45.347 00:25:38 -- common/autotest_common.sh@850 -- # return 1 00:11:45.347 00:25:38 -- common/autotest_common.sh@641 -- # es=1 00:11:45.347 00:25:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:45.347 00:25:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:45.347 00:25:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:45.347 00:25:38 -- event/cpu_locks.sh@122 -- # locks_exist 113716 00:11:45.347 00:25:38 -- event/cpu_locks.sh@22 -- # lslocks -p 113716 00:11:45.347 00:25:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:45.605 00:25:39 -- event/cpu_locks.sh@124 -- # killprocess 113716 00:11:45.605 00:25:39 -- common/autotest_common.sh@936 -- # '[' -z 113716 ']' 00:11:45.605 00:25:39 -- common/autotest_common.sh@940 -- # kill -0 113716 00:11:45.605 00:25:39 -- common/autotest_common.sh@941 -- # uname 00:11:45.605 00:25:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:45.605 00:25:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113716 00:11:45.605 00:25:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:45.605 00:25:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:45.605 killing process with pid 113716 00:11:45.605 00:25:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113716' 00:11:45.605 00:25:39 -- common/autotest_common.sh@955 -- # kill 113716 00:11:45.605 00:25:39 -- common/autotest_common.sh@960 -- # wait 113716 00:11:48.912 00:11:48.912 real 0m5.746s 00:11:48.912 user 0m5.800s 00:11:48.912 sys 0m1.107s 00:11:48.912 00:25:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:48.912 00:25:42 -- common/autotest_common.sh@10 -- # set +x 00:11:48.912 ************************************ 00:11:48.912 END TEST locking_app_on_locked_coremask 00:11:48.912 ************************************ 00:11:48.912 00:25:42 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:48.912 00:25:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:48.912 00:25:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.912 00:25:42 -- common/autotest_common.sh@10 -- # set +x 00:11:48.913 ************************************ 00:11:48.913 START TEST locking_overlapped_coremask 00:11:48.913 ************************************ 00:11:48.913 00:25:42 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:11:48.913 00:25:42 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=113836 00:11:48.913 00:25:42 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:48.913 00:25:42 -- event/cpu_locks.sh@133 -- # waitforlisten 113836 
/var/tmp/spdk.sock 00:11:48.913 00:25:42 -- common/autotest_common.sh@817 -- # '[' -z 113836 ']' 00:11:48.913 00:25:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.913 00:25:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:48.913 00:25:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.913 00:25:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:48.913 00:25:42 -- common/autotest_common.sh@10 -- # set +x 00:11:48.913 [2024-04-24 00:25:42.265055] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:11:48.913 [2024-04-24 00:25:42.265465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113836 ] 00:11:48.913 [2024-04-24 00:25:42.461200] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:49.171 [2024-04-24 00:25:42.751689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.171 [2024-04-24 00:25:42.751835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.171 [2024-04-24 00:25:42.751852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.105 00:25:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:50.105 00:25:43 -- common/autotest_common.sh@850 -- # return 0 00:11:50.105 00:25:43 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=113859 00:11:50.105 00:25:43 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 113859 /var/tmp/spdk2.sock 00:11:50.105 00:25:43 -- common/autotest_common.sh@638 -- # local es=0 00:11:50.105 00:25:43 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 113859 /var/tmp/spdk2.sock 00:11:50.105 00:25:43 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:11:50.105 00:25:43 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:50.105 00:25:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:50.105 00:25:43 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:11:50.105 00:25:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:50.105 00:25:43 -- common/autotest_common.sh@641 -- # waitforlisten 113859 /var/tmp/spdk2.sock 00:11:50.105 00:25:43 -- common/autotest_common.sh@817 -- # '[' -z 113859 ']' 00:11:50.105 00:25:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:50.105 00:25:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:50.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:50.105 00:25:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:50.105 00:25:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:50.105 00:25:43 -- common/autotest_common.sh@10 -- # set +x 00:11:50.363 [2024-04-24 00:25:43.910847] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:11:50.363 [2024-04-24 00:25:43.911369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113859 ] 00:11:50.363 [2024-04-24 00:25:44.139565] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 113836 has claimed it. 00:11:50.363 [2024-04-24 00:25:44.139895] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:50.929 ERROR: process (pid: 113859) is no longer running 00:11:50.929 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (113859) - No such process 00:11:50.929 00:25:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:50.929 00:25:44 -- common/autotest_common.sh@850 -- # return 1 00:11:50.929 00:25:44 -- common/autotest_common.sh@641 -- # es=1 00:11:50.929 00:25:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:50.929 00:25:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:50.929 00:25:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:50.929 00:25:44 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:50.929 00:25:44 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:50.929 00:25:44 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:50.929 00:25:44 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:50.929 00:25:44 -- event/cpu_locks.sh@141 -- # killprocess 113836 00:11:50.929 00:25:44 -- common/autotest_common.sh@936 -- # '[' -z 113836 ']' 00:11:50.929 00:25:44 -- common/autotest_common.sh@940 -- # kill -0 113836 00:11:50.929 00:25:44 -- common/autotest_common.sh@941 -- # uname 00:11:50.929 00:25:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:50.929 00:25:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113836 00:11:50.929 00:25:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:50.929 killing process with pid 113836 00:11:50.929 00:25:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:50.929 00:25:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113836' 00:11:50.929 00:25:44 -- common/autotest_common.sh@955 -- # kill 113836 00:11:50.929 00:25:44 -- common/autotest_common.sh@960 -- # wait 113836 00:11:54.212 ************************************ 00:11:54.212 END TEST locking_overlapped_coremask 00:11:54.212 ************************************ 00:11:54.212 00:11:54.212 real 0m5.215s 00:11:54.212 user 0m13.832s 00:11:54.212 sys 0m0.646s 00:11:54.212 00:25:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:54.212 00:25:47 -- common/autotest_common.sh@10 -- # set +x 00:11:54.212 00:25:47 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:54.212 00:25:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:54.212 00:25:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.212 00:25:47 -- common/autotest_common.sh@10 -- # set +x 00:11:54.212 ************************************ 00:11:54.212 START TEST locking_overlapped_coremask_via_rpc 00:11:54.212 
************************************ 00:11:54.212 00:25:47 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:11:54.212 00:25:47 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=113941 00:11:54.212 00:25:47 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:54.212 00:25:47 -- event/cpu_locks.sh@149 -- # waitforlisten 113941 /var/tmp/spdk.sock 00:11:54.212 00:25:47 -- common/autotest_common.sh@817 -- # '[' -z 113941 ']' 00:11:54.212 00:25:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.212 00:25:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:54.212 00:25:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.212 00:25:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:54.212 00:25:47 -- common/autotest_common.sh@10 -- # set +x 00:11:54.212 [2024-04-24 00:25:47.584406] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:11:54.212 [2024-04-24 00:25:47.584849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113941 ] 00:11:54.212 [2024-04-24 00:25:47.784337] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:54.212 [2024-04-24 00:25:47.784706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:54.470 [2024-04-24 00:25:48.087167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.470 [2024-04-24 00:25:48.087320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.470 [2024-04-24 00:25:48.087327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:55.404 00:25:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:55.404 00:25:49 -- common/autotest_common.sh@850 -- # return 0 00:11:55.404 00:25:49 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=113969 00:11:55.404 00:25:49 -- event/cpu_locks.sh@153 -- # waitforlisten 113969 /var/tmp/spdk2.sock 00:11:55.404 00:25:49 -- common/autotest_common.sh@817 -- # '[' -z 113969 ']' 00:11:55.404 00:25:49 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:55.404 00:25:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:55.404 00:25:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:55.404 00:25:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:55.404 00:25:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:55.404 00:25:49 -- common/autotest_common.sh@10 -- # set +x 00:11:55.662 [2024-04-24 00:25:49.194570] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:11:55.662 [2024-04-24 00:25:49.195161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113969 ] 00:11:55.662 [2024-04-24 00:25:49.396812] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:55.662 [2024-04-24 00:25:49.410989] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:56.227 [2024-04-24 00:25:49.901967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.227 [2024-04-24 00:25:49.902119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.227 [2024-04-24 00:25:49.902120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:58.832 00:25:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:58.832 00:25:52 -- common/autotest_common.sh@850 -- # return 0 00:11:58.832 00:25:52 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:58.832 00:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.832 00:25:52 -- common/autotest_common.sh@10 -- # set +x 00:11:58.832 00:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:58.832 00:25:52 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:58.832 00:25:52 -- common/autotest_common.sh@638 -- # local es=0 00:11:58.832 00:25:52 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:58.832 00:25:52 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:11:58.832 00:25:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:58.832 00:25:52 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:11:58.832 00:25:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:58.832 00:25:52 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:58.832 00:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:58.832 00:25:52 -- common/autotest_common.sh@10 -- # set +x 00:11:58.832 [2024-04-24 00:25:52.251184] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 113941 has claimed it. 00:11:58.832 request: 00:11:58.832 { 00:11:58.832 "method": "framework_enable_cpumask_locks", 00:11:58.832 "req_id": 1 00:11:58.832 } 00:11:58.832 Got JSON-RPC error response 00:11:58.832 response: 00:11:58.832 { 00:11:58.832 "code": -32603, 00:11:58.832 "message": "Failed to claim CPU core: 2" 00:11:58.832 } 00:11:58.832 00:25:52 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:11:58.832 00:25:52 -- common/autotest_common.sh@641 -- # es=1 00:11:58.832 00:25:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:58.832 00:25:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:58.832 00:25:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:58.832 00:25:52 -- event/cpu_locks.sh@158 -- # waitforlisten 113941 /var/tmp/spdk.sock 00:11:58.832 00:25:52 -- common/autotest_common.sh@817 -- # '[' -z 113941 ']' 00:11:58.832 00:25:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.832 00:25:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:58.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
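Note on the JSON-RPC failure above: it is the core of this test — the second target's core mask (0x1c) overlaps core 2, which the first target (pid 113941, mask 0x7) locked when framework_enable_cpumask_locks succeeded for it. Outside the harness the same check can be driven with SPDK's rpc.py client; the sketch below is illustrative only, reusing the socket path seen in the trace, and is not harness output.
  # Ask the second target (pid 113969, reached via /var/tmp/spdk2.sock) to claim its cores now.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # Expected to fail with -32603 "Failed to claim CPU core: 2" while pid 113941 still holds
  # /var/tmp/spdk_cpu_lock_002; the surrounding NOT wrapper turns that failure into a pass
  # for locking_overlapped_coremask_via_rpc.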
00:11:58.832 00:25:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.832 00:25:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:58.832 00:25:52 -- common/autotest_common.sh@10 -- # set +x 00:11:58.832 00:25:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:58.832 00:25:52 -- common/autotest_common.sh@850 -- # return 0 00:11:58.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:58.832 00:25:52 -- event/cpu_locks.sh@159 -- # waitforlisten 113969 /var/tmp/spdk2.sock 00:11:58.832 00:25:52 -- common/autotest_common.sh@817 -- # '[' -z 113969 ']' 00:11:58.832 00:25:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:58.832 00:25:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:58.832 00:25:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:58.832 00:25:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:58.832 00:25:52 -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 00:25:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:59.396 00:25:52 -- common/autotest_common.sh@850 -- # return 0 00:11:59.396 00:25:52 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:59.396 00:25:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:59.396 00:25:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:59.396 00:25:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:59.396 00:11:59.396 real 0m5.407s 00:11:59.396 user 0m1.963s 00:11:59.396 sys 0m0.278s 00:11:59.396 00:25:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:59.396 00:25:52 -- common/autotest_common.sh@10 -- # set +x 00:11:59.396 ************************************ 00:11:59.396 END TEST locking_overlapped_coremask_via_rpc 00:11:59.396 ************************************ 00:11:59.396 00:25:52 -- event/cpu_locks.sh@174 -- # cleanup 00:11:59.396 00:25:52 -- event/cpu_locks.sh@15 -- # [[ -z 113941 ]] 00:11:59.396 00:25:52 -- event/cpu_locks.sh@15 -- # killprocess 113941 00:11:59.396 00:25:52 -- common/autotest_common.sh@936 -- # '[' -z 113941 ']' 00:11:59.396 00:25:52 -- common/autotest_common.sh@940 -- # kill -0 113941 00:11:59.396 00:25:52 -- common/autotest_common.sh@941 -- # uname 00:11:59.396 00:25:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:59.396 00:25:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113941 00:11:59.396 00:25:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:59.396 killing process with pid 113941 00:11:59.396 00:25:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:59.396 00:25:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113941' 00:11:59.396 00:25:52 -- common/autotest_common.sh@955 -- # kill 113941 00:11:59.396 00:25:52 -- common/autotest_common.sh@960 -- # wait 113941 00:12:02.681 00:25:55 -- event/cpu_locks.sh@16 -- # [[ -z 113969 ]] 00:12:02.681 00:25:55 -- event/cpu_locks.sh@16 -- # killprocess 113969 00:12:02.681 00:25:55 -- common/autotest_common.sh@936 -- # '[' -z 113969 ']' 
00:12:02.681 00:25:55 -- common/autotest_common.sh@940 -- # kill -0 113969 00:12:02.681 00:25:55 -- common/autotest_common.sh@941 -- # uname 00:12:02.681 00:25:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:02.681 00:25:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113969 00:12:02.681 killing process with pid 113969 00:12:02.681 00:25:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:02.681 00:25:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:02.681 00:25:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113969' 00:12:02.681 00:25:55 -- common/autotest_common.sh@955 -- # kill 113969 00:12:02.681 00:25:55 -- common/autotest_common.sh@960 -- # wait 113969 00:12:05.225 00:25:58 -- event/cpu_locks.sh@18 -- # rm -f 00:12:05.225 00:25:58 -- event/cpu_locks.sh@1 -- # cleanup 00:12:05.225 00:25:58 -- event/cpu_locks.sh@15 -- # [[ -z 113941 ]] 00:12:05.225 00:25:58 -- event/cpu_locks.sh@15 -- # killprocess 113941 00:12:05.225 00:25:58 -- common/autotest_common.sh@936 -- # '[' -z 113941 ']' 00:12:05.225 00:25:58 -- common/autotest_common.sh@940 -- # kill -0 113941 00:12:05.225 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (113941) - No such process 00:12:05.225 Process with pid 113941 is not found 00:12:05.225 00:25:58 -- common/autotest_common.sh@963 -- # echo 'Process with pid 113941 is not found' 00:12:05.225 00:25:58 -- event/cpu_locks.sh@16 -- # [[ -z 113969 ]] 00:12:05.225 00:25:58 -- event/cpu_locks.sh@16 -- # killprocess 113969 00:12:05.225 00:25:58 -- common/autotest_common.sh@936 -- # '[' -z 113969 ']' 00:12:05.225 Process with pid 113969 is not found 00:12:05.225 00:25:58 -- common/autotest_common.sh@940 -- # kill -0 113969 00:12:05.225 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (113969) - No such process 00:12:05.225 00:25:58 -- common/autotest_common.sh@963 -- # echo 'Process with pid 113969 is not found' 00:12:05.225 00:25:58 -- event/cpu_locks.sh@18 -- # rm -f 00:12:05.225 ************************************ 00:12:05.225 END TEST cpu_locks 00:12:05.225 ************************************ 00:12:05.225 00:12:05.225 real 0m58.566s 00:12:05.225 user 1m40.997s 00:12:05.225 sys 0m7.583s 00:12:05.225 00:25:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:05.225 00:25:58 -- common/autotest_common.sh@10 -- # set +x 00:12:05.225 ************************************ 00:12:05.225 END TEST event 00:12:05.225 ************************************ 00:12:05.225 00:12:05.225 real 1m33.972s 00:12:05.225 user 2m48.926s 00:12:05.225 sys 0m12.430s 00:12:05.225 00:25:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:05.225 00:25:58 -- common/autotest_common.sh@10 -- # set +x 00:12:05.225 00:25:58 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:05.225 00:25:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:05.225 00:25:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:05.225 00:25:58 -- common/autotest_common.sh@10 -- # set +x 00:12:05.225 ************************************ 00:12:05.225 START TEST thread 00:12:05.225 ************************************ 00:12:05.225 00:25:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:05.225 * Looking for test storage... 
00:12:05.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:05.225 00:25:58 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:05.225 00:25:58 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:12:05.225 00:25:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:05.225 00:25:58 -- common/autotest_common.sh@10 -- # set +x 00:12:05.225 ************************************ 00:12:05.225 START TEST thread_poller_perf 00:12:05.225 ************************************ 00:12:05.225 00:25:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:05.225 [2024-04-24 00:25:58.994823] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:12:05.225 [2024-04-24 00:25:58.995536] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114200 ] 00:12:05.560 [2024-04-24 00:25:59.160945] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.840 [2024-04-24 00:25:59.367259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.840 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:07.215 ====================================== 00:12:07.215 busy:2110135774 (cyc) 00:12:07.215 total_run_count: 345000 00:12:07.215 tsc_hz: 2100000000 (cyc) 00:12:07.215 ====================================== 00:12:07.215 poller_cost: 6116 (cyc), 2912 (nsec) 00:12:07.215 ************************************ 00:12:07.215 END TEST thread_poller_perf 00:12:07.215 ************************************ 00:12:07.215 00:12:07.215 real 0m1.904s 00:12:07.215 user 0m1.694s 00:12:07.215 sys 0m0.105s 00:12:07.215 00:26:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:07.215 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:12:07.215 00:26:00 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:07.215 00:26:00 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:12:07.215 00:26:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:07.215 00:26:00 -- common/autotest_common.sh@10 -- # set +x 00:12:07.215 ************************************ 00:12:07.215 START TEST thread_poller_perf 00:12:07.215 ************************************ 00:12:07.215 00:26:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:07.473 [2024-04-24 00:26:01.014993] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:12:07.473 [2024-04-24 00:26:01.015646] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114254 ] 00:12:07.473 [2024-04-24 00:26:01.197946] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.731 [2024-04-24 00:26:01.462503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.731 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:12:09.185 ====================================== 00:12:09.185 busy:2105178162 (cyc) 00:12:09.185 total_run_count: 4393000 00:12:09.185 tsc_hz: 2100000000 (cyc) 00:12:09.185 ====================================== 00:12:09.185 poller_cost: 479 (cyc), 228 (nsec) 00:12:09.185 ************************************ 00:12:09.185 END TEST thread_poller_perf 00:12:09.185 ************************************ 00:12:09.185 00:12:09.185 real 0m1.998s 00:12:09.185 user 0m1.754s 00:12:09.185 sys 0m0.142s 00:12:09.185 00:26:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:09.185 00:26:02 -- common/autotest_common.sh@10 -- # set +x 00:12:09.443 00:26:03 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:12:09.443 00:26:03 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:12:09.443 00:26:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:09.443 00:26:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.443 00:26:03 -- common/autotest_common.sh@10 -- # set +x 00:12:09.443 ************************************ 00:12:09.443 START TEST thread_spdk_lock 00:12:09.443 ************************************ 00:12:09.443 00:26:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:12:09.443 [2024-04-24 00:26:03.110727] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:12:09.443 [2024-04-24 00:26:03.110959] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114307 ] 00:12:09.701 [2024-04-24 00:26:03.297458] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:09.958 [2024-04-24 00:26:03.577189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.958 [2024-04-24 00:26:03.577188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.524 [2024-04-24 00:26:04.097023] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:10.524 [2024-04-24 00:26:04.097132] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:12:10.524 [2024-04-24 00:26:04.097164] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x55ee7fb4d240 00:12:10.524 [2024-04-24 00:26:04.107763] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:10.524 [2024-04-24 00:26:04.107865] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:10.524 [2024-04-24 00:26:04.107901] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:12:10.781 Starting test contend 00:12:10.781 Worker Delay Wait us Hold us Total us 00:12:10.781 0 3 123551 193563 317114 00:12:10.781 1 5 57140 297004 354145 00:12:10.781 PASS test contend 00:12:10.781 Starting test hold_by_poller 
00:12:10.781 PASS test hold_by_poller 00:12:10.781 Starting test hold_by_message 00:12:10.781 PASS test hold_by_message 00:12:10.781 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:12:10.781 100014 assertions passed 00:12:10.781 0 assertions failed 00:12:10.781 ************************************ 00:12:10.781 END TEST thread_spdk_lock 00:12:10.781 ************************************ 00:12:10.781 00:12:10.781 real 0m1.503s 00:12:10.781 user 0m1.789s 00:12:10.781 sys 0m0.145s 00:12:10.781 00:26:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:10.781 00:26:04 -- common/autotest_common.sh@10 -- # set +x 00:12:11.039 00:12:11.039 real 0m5.799s 00:12:11.039 user 0m5.441s 00:12:11.039 sys 0m0.591s 00:12:11.039 00:26:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:11.039 00:26:04 -- common/autotest_common.sh@10 -- # set +x 00:12:11.039 ************************************ 00:12:11.039 END TEST thread 00:12:11.039 ************************************ 00:12:11.039 00:26:04 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:11.039 00:26:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:11.039 00:26:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:11.039 00:26:04 -- common/autotest_common.sh@10 -- # set +x 00:12:11.039 ************************************ 00:12:11.039 START TEST accel 00:12:11.039 ************************************ 00:12:11.039 00:26:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:11.039 * Looking for test storage... 00:12:11.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:11.039 00:26:04 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:12:11.039 00:26:04 -- accel/accel.sh@82 -- # get_expected_opcs 00:12:11.039 00:26:04 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:11.039 00:26:04 -- accel/accel.sh@62 -- # spdk_tgt_pid=114398 00:12:11.039 00:26:04 -- accel/accel.sh@63 -- # waitforlisten 114398 00:12:11.039 00:26:04 -- common/autotest_common.sh@817 -- # '[' -z 114398 ']' 00:12:11.039 00:26:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.039 00:26:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:11.039 00:26:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.039 00:26:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:11.039 00:26:04 -- common/autotest_common.sh@10 -- # set +x 00:12:11.039 00:26:04 -- accel/accel.sh@61 -- # build_accel_config 00:12:11.039 00:26:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:11.039 00:26:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:11.039 00:26:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:11.039 00:26:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:11.039 00:26:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:11.039 00:26:04 -- accel/accel.sh@40 -- # local IFS=, 00:12:11.039 00:26:04 -- accel/accel.sh@41 -- # jq -r . 00:12:11.039 00:26:04 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:12:11.297 [2024-04-24 00:26:04.880397] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:12:11.297 [2024-04-24 00:26:04.880582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114398 ] 00:12:11.297 [2024-04-24 00:26:05.062421] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.555 [2024-04-24 00:26:05.300498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.929 00:26:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:12.929 00:26:06 -- common/autotest_common.sh@850 -- # return 0 00:12:12.929 00:26:06 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:12.929 00:26:06 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:12.929 00:26:06 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:12:12.929 00:26:06 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:12.929 00:26:06 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:12.929 00:26:06 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:12.929 00:26:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.929 00:26:06 -- common/autotest_common.sh@10 -- # set +x 00:12:12.929 00:26:06 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:12:12.929 00:26:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 
00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # IFS== 00:12:12.929 00:26:06 -- accel/accel.sh@72 -- # read -r opc module 00:12:12.929 00:26:06 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:12.929 00:26:06 -- accel/accel.sh@75 -- # killprocess 114398 00:12:12.929 00:26:06 -- common/autotest_common.sh@936 -- # '[' -z 114398 ']' 00:12:12.929 00:26:06 -- common/autotest_common.sh@940 -- # kill -0 114398 00:12:12.929 00:26:06 -- common/autotest_common.sh@941 -- # uname 00:12:12.929 00:26:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:12.929 00:26:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114398 00:12:12.929 00:26:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:12.929 00:26:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:12.929 00:26:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114398' 00:12:12.929 killing process with pid 114398 00:12:12.929 00:26:06 -- common/autotest_common.sh@955 -- # kill 114398 00:12:12.929 00:26:06 -- common/autotest_common.sh@960 -- # wait 114398 00:12:15.458 00:26:09 -- accel/accel.sh@76 -- # trap - ERR 00:12:15.458 00:26:09 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:15.458 00:26:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:15.458 00:26:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:15.458 00:26:09 -- common/autotest_common.sh@10 -- # set +x 00:12:15.458 00:26:09 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:12:15.458 00:26:09 -- accel/accel.sh@12 -- # build_accel_config 00:12:15.458 00:26:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:15.458 00:26:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:15.458 00:26:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:15.458 00:26:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:15.458 
00:26:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:15.458 00:26:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:15.458 00:26:09 -- accel/accel.sh@40 -- # local IFS=, 00:12:15.458 00:26:09 -- accel/accel.sh@41 -- # jq -r . 00:12:15.715 00:26:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:15.715 00:26:09 -- common/autotest_common.sh@10 -- # set +x 00:12:15.715 00:26:09 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:15.715 00:26:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:15.715 00:26:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:15.715 00:26:09 -- common/autotest_common.sh@10 -- # set +x 00:12:15.715 ************************************ 00:12:15.715 START TEST accel_missing_filename 00:12:15.715 ************************************ 00:12:15.715 00:26:09 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:12:15.715 00:26:09 -- common/autotest_common.sh@638 -- # local es=0 00:12:15.715 00:26:09 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:15.715 00:26:09 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:15.715 00:26:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:15.715 00:26:09 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:15.715 00:26:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:15.715 00:26:09 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:12:15.715 00:26:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:15.715 00:26:09 -- accel/accel.sh@12 -- # build_accel_config 00:12:15.715 00:26:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:15.715 00:26:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:15.715 00:26:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:15.715 00:26:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:15.715 00:26:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:15.715 00:26:09 -- accel/accel.sh@40 -- # local IFS=, 00:12:15.715 00:26:09 -- accel/accel.sh@41 -- # jq -r . 00:12:15.715 [2024-04-24 00:26:09.406835] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:12:15.715 [2024-04-24 00:26:09.407157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114502 ] 00:12:15.972 [2024-04-24 00:26:09.591582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.230 [2024-04-24 00:26:09.877351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.500 [2024-04-24 00:26:10.128934] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:17.091 [2024-04-24 00:26:10.646518] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:12:17.349 A filename is required. 
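The "A filename is required." abort above is the expected outcome: a compress workload needs an uncompressed input file passed with -l. For reference only, a minimal way to reproduce it by hand with the same binary (an illustrative sketch, not harness output — the harness additionally passes the generated config via -c):
  # No -l <input file> for a compress workload, so accel_perf exits non-zero at startup.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
  # The NOT wrapper around accel_missing_filename treats that non-zero exit as a pass.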
00:12:17.349 ************************************ 00:12:17.349 END TEST accel_missing_filename 00:12:17.349 ************************************ 00:12:17.349 00:26:11 -- common/autotest_common.sh@641 -- # es=234 00:12:17.349 00:26:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:17.349 00:26:11 -- common/autotest_common.sh@650 -- # es=106 00:12:17.349 00:26:11 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:17.349 00:26:11 -- common/autotest_common.sh@658 -- # es=1 00:12:17.349 00:26:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:17.349 00:12:17.349 real 0m1.715s 00:12:17.349 user 0m1.434s 00:12:17.349 sys 0m0.224s 00:12:17.349 00:26:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:17.349 00:26:11 -- common/autotest_common.sh@10 -- # set +x 00:12:17.349 00:26:11 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:17.349 00:26:11 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:12:17.349 00:26:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:17.349 00:26:11 -- common/autotest_common.sh@10 -- # set +x 00:12:17.607 ************************************ 00:12:17.607 START TEST accel_compress_verify 00:12:17.607 ************************************ 00:12:17.607 00:26:11 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:17.607 00:26:11 -- common/autotest_common.sh@638 -- # local es=0 00:12:17.607 00:26:11 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:17.607 00:26:11 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:17.607 00:26:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:17.607 00:26:11 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:17.607 00:26:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:17.607 00:26:11 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:17.607 00:26:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:17.607 00:26:11 -- accel/accel.sh@12 -- # build_accel_config 00:12:17.607 00:26:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:17.607 00:26:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:17.607 00:26:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:17.607 00:26:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:17.607 00:26:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:17.607 00:26:11 -- accel/accel.sh@40 -- # local IFS=, 00:12:17.607 00:26:11 -- accel/accel.sh@41 -- # jq -r . 00:12:17.607 [2024-04-24 00:26:11.228597] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:12:17.607 [2024-04-24 00:26:11.229143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114557 ] 00:12:17.884 [2024-04-24 00:26:11.420965] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.143 [2024-04-24 00:26:11.717206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.401 [2024-04-24 00:26:12.057737] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:18.986 [2024-04-24 00:26:12.614105] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:12:19.250 00:12:19.250 Compression does not support the verify option, aborting. 00:12:19.250 00:26:13 -- common/autotest_common.sh@641 -- # es=161 00:12:19.250 00:26:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:19.250 00:26:13 -- common/autotest_common.sh@650 -- # es=33 00:12:19.250 00:26:13 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:19.250 00:26:13 -- common/autotest_common.sh@658 -- # es=1 00:12:19.250 00:26:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:19.250 00:12:19.250 real 0m1.868s 00:12:19.250 user 0m1.573s 00:12:19.250 sys 0m0.247s 00:12:19.250 00:26:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.250 ************************************ 00:12:19.250 END TEST accel_compress_verify 00:12:19.250 00:26:13 -- common/autotest_common.sh@10 -- # set +x 00:12:19.250 ************************************ 00:12:19.508 00:26:13 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:19.508 00:26:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:19.508 00:26:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.508 00:26:13 -- common/autotest_common.sh@10 -- # set +x 00:12:19.508 ************************************ 00:12:19.508 START TEST accel_wrong_workload 00:12:19.508 ************************************ 00:12:19.508 00:26:13 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:12:19.508 00:26:13 -- common/autotest_common.sh@638 -- # local es=0 00:12:19.508 00:26:13 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:19.508 00:26:13 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:19.508 00:26:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.508 00:26:13 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:19.508 00:26:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.508 00:26:13 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:12:19.508 00:26:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:19.508 00:26:13 -- accel/accel.sh@12 -- # build_accel_config 00:12:19.508 00:26:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:19.508 00:26:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:19.508 00:26:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.508 00:26:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.508 00:26:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:19.508 00:26:13 -- accel/accel.sh@40 -- # local IFS=, 00:12:19.508 00:26:13 -- accel/accel.sh@41 -- # jq -r . 
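accel_compress_verify is the complementary negative case: the same compress invocation, now with an input file but also with -y, which asks accel_perf to verify results; compression does not implement that option, so the run aborts with the message captured above and the wrapper again records the expected failure through the same status folding (161 -> 33 -> 1). By hand this corresponds roughly to:

    # expected to abort: compress does not support result verification (-y)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y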
00:12:19.508 Unsupported workload type: foobar 00:12:19.508 [2024-04-24 00:26:13.169466] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:19.508 accel_perf options: 00:12:19.508 [-h help message] 00:12:19.508 [-q queue depth per core] 00:12:19.508 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:19.508 [-T number of threads per core 00:12:19.508 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:19.508 [-t time in seconds] 00:12:19.508 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:19.508 [ dif_verify, , dif_generate, dif_generate_copy 00:12:19.508 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:19.508 [-l for compress/decompress workloads, name of uncompressed input file 00:12:19.508 [-S for crc32c workload, use this seed value (default 0) 00:12:19.508 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:19.508 [-f for fill workload, use this BYTE value (default 255) 00:12:19.508 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:19.508 [-y verify result if this switch is on] 00:12:19.508 [-a tasks to allocate per core (default: same value as -q)] 00:12:19.508 Can be used to spread operations across a wider range of memory. 00:12:19.508 00:26:13 -- common/autotest_common.sh@641 -- # es=1 00:12:19.508 00:26:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:19.509 00:26:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:19.509 00:26:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:19.509 00:12:19.509 real 0m0.089s 00:12:19.509 user 0m0.099s 00:12:19.509 sys 0m0.051s 00:12:19.509 00:26:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.509 00:26:13 -- common/autotest_common.sh@10 -- # set +x 00:12:19.509 ************************************ 00:12:19.509 END TEST accel_wrong_workload 00:12:19.509 ************************************ 00:12:19.509 00:26:13 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:19.509 00:26:13 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:12:19.509 00:26:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.509 00:26:13 -- common/autotest_common.sh@10 -- # set +x 00:12:19.509 ************************************ 00:12:19.509 START TEST accel_negative_buffers 00:12:19.509 ************************************ 00:12:19.509 00:26:13 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:19.509 00:26:13 -- common/autotest_common.sh@638 -- # local es=0 00:12:19.509 00:26:13 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:19.509 00:26:13 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:19.509 00:26:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.509 00:26:13 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:19.509 00:26:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:19.509 00:26:13 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:12:19.509 00:26:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:19.509 00:26:13 -- accel/accel.sh@12 -- # 
build_accel_config 00:12:19.509 00:26:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:19.509 00:26:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:19.509 00:26:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.509 00:26:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.509 00:26:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:19.509 00:26:13 -- accel/accel.sh@40 -- # local IFS=, 00:12:19.767 00:26:13 -- accel/accel.sh@41 -- # jq -r . 00:12:19.767 -x option must be non-negative. 00:12:19.767 [2024-04-24 00:26:13.342932] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:19.767 accel_perf options: 00:12:19.767 [-h help message] 00:12:19.767 [-q queue depth per core] 00:12:19.767 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:19.767 [-T number of threads per core 00:12:19.767 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:19.767 [-t time in seconds] 00:12:19.767 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:19.767 [ dif_verify, , dif_generate, dif_generate_copy 00:12:19.767 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:19.767 [-l for compress/decompress workloads, name of uncompressed input file 00:12:19.767 [-S for crc32c workload, use this seed value (default 0) 00:12:19.767 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:19.767 [-f for fill workload, use this BYTE value (default 255) 00:12:19.767 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:19.767 [-y verify result if this switch is on] 00:12:19.767 [-a tasks to allocate per core (default: same value as -q)] 00:12:19.768 Can be used to spread operations across a wider range of memory. 
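The two option dumps above (triggered by -w foobar and by -x -1) double as a compact reference for valid invocations: -w must name one of the listed workloads, and -x, used by the xor workload, must be at least 2. A minimal positive run staying within those constraints might look like:

    # xor across two source buffers, verifying the result
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -x 2 -y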
00:12:19.768 00:26:13 -- common/autotest_common.sh@641 -- # es=1 00:12:19.768 00:26:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:19.768 00:26:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:19.768 00:26:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:19.768 00:12:19.768 real 0m0.086s 00:12:19.768 user 0m0.083s 00:12:19.768 sys 0m0.055s 00:12:19.768 00:26:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.768 00:26:13 -- common/autotest_common.sh@10 -- # set +x 00:12:19.768 ************************************ 00:12:19.768 END TEST accel_negative_buffers 00:12:19.768 ************************************ 00:12:19.768 00:26:13 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:19.768 00:26:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:19.768 00:26:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.768 00:26:13 -- common/autotest_common.sh@10 -- # set +x 00:12:19.768 ************************************ 00:12:19.768 START TEST accel_crc32c 00:12:19.768 ************************************ 00:12:19.768 00:26:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:19.768 00:26:13 -- accel/accel.sh@16 -- # local accel_opc 00:12:19.768 00:26:13 -- accel/accel.sh@17 -- # local accel_module 00:12:19.768 00:26:13 -- accel/accel.sh@19 -- # IFS=: 00:12:19.768 00:26:13 -- accel/accel.sh@19 -- # read -r var val 00:12:19.768 00:26:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:19.768 00:26:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:19.768 00:26:13 -- accel/accel.sh@12 -- # build_accel_config 00:12:19.768 00:26:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:19.768 00:26:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:19.768 00:26:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:19.768 00:26:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:19.768 00:26:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:19.768 00:26:13 -- accel/accel.sh@40 -- # local IFS=, 00:12:19.768 00:26:13 -- accel/accel.sh@41 -- # jq -r . 00:12:19.768 [2024-04-24 00:26:13.523080] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
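accel_crc32c is the first of the positive cases: the trace above shows build_accel_config assembling an (empty) JSON config that is piped to accel_perf on /dev/fd/62, and the workload itself corresponds (minus that config pipe) to:

    # CRC-32C over 4 KiB buffers, seed 32 (-S), with result verification (-y)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y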
00:12:19.768 [2024-04-24 00:26:13.523367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114666 ] 00:12:20.026 [2024-04-24 00:26:13.719936] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.284 [2024-04-24 00:26:13.985992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val= 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val= 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val=0x1 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val= 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val= 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val=crc32c 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val=32 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val= 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val=software 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@22 -- # accel_module=software 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val=32 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val=32 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val=1 00:12:20.543 00:26:14 
-- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val=Yes 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val= 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:20.543 00:26:14 -- accel/accel.sh@20 -- # val= 00:12:20.543 00:26:14 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # IFS=: 00:12:20.543 00:26:14 -- accel/accel.sh@19 -- # read -r var val 00:12:22.447 00:26:16 -- accel/accel.sh@20 -- # val= 00:12:22.447 00:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.447 00:26:16 -- accel/accel.sh@19 -- # IFS=: 00:12:22.447 00:26:16 -- accel/accel.sh@19 -- # read -r var val 00:12:22.447 00:26:16 -- accel/accel.sh@20 -- # val= 00:12:22.447 00:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.447 00:26:16 -- accel/accel.sh@19 -- # IFS=: 00:12:22.447 00:26:16 -- accel/accel.sh@19 -- # read -r var val 00:12:22.447 00:26:16 -- accel/accel.sh@20 -- # val= 00:12:22.447 00:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.447 00:26:16 -- accel/accel.sh@19 -- # IFS=: 00:12:22.447 00:26:16 -- accel/accel.sh@19 -- # read -r var val 00:12:22.447 00:26:16 -- accel/accel.sh@20 -- # val= 00:12:22.447 00:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.447 00:26:16 -- accel/accel.sh@19 -- # IFS=: 00:12:22.447 00:26:16 -- accel/accel.sh@19 -- # read -r var val 00:12:22.447 00:26:16 -- accel/accel.sh@20 -- # val= 00:12:22.447 00:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.447 00:26:16 -- accel/accel.sh@19 -- # IFS=: 00:12:22.447 00:26:16 -- accel/accel.sh@19 -- # read -r var val 00:12:22.447 00:26:16 -- accel/accel.sh@20 -- # val= 00:12:22.447 00:26:16 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.447 00:26:16 -- accel/accel.sh@19 -- # IFS=: 00:12:22.447 00:26:16 -- accel/accel.sh@19 -- # read -r var val 00:12:22.447 00:26:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:22.447 00:26:16 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:22.447 00:26:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:22.447 00:12:22.447 real 0m2.690s 00:12:22.447 user 0m2.411s 00:12:22.447 sys 0m0.197s 00:12:22.447 00:26:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:22.447 00:26:16 -- common/autotest_common.sh@10 -- # set +x 00:12:22.447 ************************************ 00:12:22.447 END TEST accel_crc32c 00:12:22.447 ************************************ 00:12:22.447 00:26:16 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:22.447 00:26:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:22.447 00:26:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:22.447 00:26:16 -- common/autotest_common.sh@10 -- # set +x 00:12:22.705 ************************************ 00:12:22.705 START TEST accel_crc32c_C2 00:12:22.705 
************************************ 00:12:22.705 00:26:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:22.705 00:26:16 -- accel/accel.sh@16 -- # local accel_opc 00:12:22.705 00:26:16 -- accel/accel.sh@17 -- # local accel_module 00:12:22.705 00:26:16 -- accel/accel.sh@19 -- # IFS=: 00:12:22.706 00:26:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:22.706 00:26:16 -- accel/accel.sh@19 -- # read -r var val 00:12:22.706 00:26:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:22.706 00:26:16 -- accel/accel.sh@12 -- # build_accel_config 00:12:22.706 00:26:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:22.706 00:26:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:22.706 00:26:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:22.706 00:26:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:22.706 00:26:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:22.706 00:26:16 -- accel/accel.sh@40 -- # local IFS=, 00:12:22.706 00:26:16 -- accel/accel.sh@41 -- # jq -r . 00:12:22.706 [2024-04-24 00:26:16.311693] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:12:22.706 [2024-04-24 00:26:16.311990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114728 ] 00:12:22.964 [2024-04-24 00:26:16.504104] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.221 [2024-04-24 00:26:16.827373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val= 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val= 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val=0x1 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val= 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val= 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val=crc32c 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val=0 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case 
"$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val= 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val=software 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@22 -- # accel_module=software 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val=32 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val=32 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val=1 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val=Yes 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val= 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:23.479 00:26:17 -- accel/accel.sh@20 -- # val= 00:12:23.479 00:26:17 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # IFS=: 00:12:23.479 00:26:17 -- accel/accel.sh@19 -- # read -r var val 00:12:25.380 00:26:19 -- accel/accel.sh@20 -- # val= 00:12:25.380 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:25.380 00:26:19 -- accel/accel.sh@20 -- # val= 00:12:25.380 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:25.380 00:26:19 -- accel/accel.sh@20 -- # val= 00:12:25.380 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:25.380 00:26:19 -- accel/accel.sh@20 -- # val= 00:12:25.380 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:25.380 00:26:19 -- accel/accel.sh@20 -- # val= 00:12:25.380 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:25.380 00:26:19 -- accel/accel.sh@20 -- # val= 
00:12:25.380 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:25.380 00:26:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:25.380 00:26:19 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:25.380 00:26:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:25.380 00:12:25.380 real 0m2.775s 00:12:25.380 user 0m2.476s 00:12:25.380 sys 0m0.217s 00:12:25.380 ************************************ 00:12:25.380 END TEST accel_crc32c_C2 00:12:25.380 ************************************ 00:12:25.380 00:26:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:25.380 00:26:19 -- common/autotest_common.sh@10 -- # set +x 00:12:25.380 00:26:19 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:25.380 00:26:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:25.380 00:26:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:25.380 00:26:19 -- common/autotest_common.sh@10 -- # set +x 00:12:25.380 ************************************ 00:12:25.380 START TEST accel_copy 00:12:25.380 ************************************ 00:12:25.380 00:26:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:12:25.380 00:26:19 -- accel/accel.sh@16 -- # local accel_opc 00:12:25.380 00:26:19 -- accel/accel.sh@17 -- # local accel_module 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:25.380 00:26:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:25.380 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:25.380 00:26:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:25.380 00:26:19 -- accel/accel.sh@12 -- # build_accel_config 00:12:25.380 00:26:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:25.380 00:26:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:25.380 00:26:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:25.380 00:26:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:25.380 00:26:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:25.380 00:26:19 -- accel/accel.sh@40 -- # local IFS=, 00:12:25.380 00:26:19 -- accel/accel.sh@41 -- # jq -r . 00:12:25.638 [2024-04-24 00:26:19.180064] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
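accel_crc32c_C2, which finished just above, reruns the CRC-32C workload with -C 2; per the option listing earlier, -C sets the I/O vector size for supported workloads, so each operation is split across two buffer segments. The accel_copy test starting here is the plain single-buffer case, corresponding to:

    # straight copy of 4 KiB buffers with verification
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y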
00:12:25.638 [2024-04-24 00:26:19.180270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114797 ] 00:12:25.638 [2024-04-24 00:26:19.348069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.897 [2024-04-24 00:26:19.661866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val= 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val= 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val=0x1 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val= 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val= 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val=copy 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@23 -- # accel_opc=copy 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val= 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val=software 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@22 -- # accel_module=software 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val=32 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val=32 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val=1 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:26.155 
00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val=Yes 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val= 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:26.155 00:26:19 -- accel/accel.sh@20 -- # val= 00:12:26.155 00:26:19 -- accel/accel.sh@21 -- # case "$var" in 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # IFS=: 00:12:26.155 00:26:19 -- accel/accel.sh@19 -- # read -r var val 00:12:28.084 00:26:21 -- accel/accel.sh@20 -- # val= 00:12:28.084 00:26:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.084 00:26:21 -- accel/accel.sh@19 -- # IFS=: 00:12:28.084 00:26:21 -- accel/accel.sh@19 -- # read -r var val 00:12:28.084 00:26:21 -- accel/accel.sh@20 -- # val= 00:12:28.084 00:26:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.084 00:26:21 -- accel/accel.sh@19 -- # IFS=: 00:12:28.084 00:26:21 -- accel/accel.sh@19 -- # read -r var val 00:12:28.084 00:26:21 -- accel/accel.sh@20 -- # val= 00:12:28.084 00:26:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.084 00:26:21 -- accel/accel.sh@19 -- # IFS=: 00:12:28.084 00:26:21 -- accel/accel.sh@19 -- # read -r var val 00:12:28.084 00:26:21 -- accel/accel.sh@20 -- # val= 00:12:28.084 00:26:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.084 00:26:21 -- accel/accel.sh@19 -- # IFS=: 00:12:28.084 00:26:21 -- accel/accel.sh@19 -- # read -r var val 00:12:28.084 00:26:21 -- accel/accel.sh@20 -- # val= 00:12:28.084 00:26:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.084 00:26:21 -- accel/accel.sh@19 -- # IFS=: 00:12:28.084 00:26:21 -- accel/accel.sh@19 -- # read -r var val 00:12:28.084 00:26:21 -- accel/accel.sh@20 -- # val= 00:12:28.084 00:26:21 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.084 00:26:21 -- accel/accel.sh@19 -- # IFS=: 00:12:28.084 00:26:21 -- accel/accel.sh@19 -- # read -r var val 00:12:28.084 00:26:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:28.084 00:26:21 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:28.084 00:26:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:28.084 00:12:28.084 real 0m2.660s 00:12:28.084 user 0m2.399s 00:12:28.084 sys 0m0.186s 00:12:28.084 00:26:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:28.084 ************************************ 00:12:28.084 END TEST accel_copy 00:12:28.084 ************************************ 00:12:28.084 00:26:21 -- common/autotest_common.sh@10 -- # set +x 00:12:28.084 00:26:21 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:28.084 00:26:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:28.084 00:26:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.084 00:26:21 -- common/autotest_common.sh@10 -- # set +x 00:12:28.342 ************************************ 00:12:28.342 START TEST accel_fill 00:12:28.342 ************************************ 00:12:28.342 00:26:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:28.342 00:26:21 -- accel/accel.sh@16 -- # local accel_opc 00:12:28.343 00:26:21 -- accel/accel.sh@17 -- # local 
accel_module 00:12:28.343 00:26:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:28.343 00:26:21 -- accel/accel.sh@19 -- # IFS=: 00:12:28.343 00:26:21 -- accel/accel.sh@19 -- # read -r var val 00:12:28.343 00:26:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:28.343 00:26:21 -- accel/accel.sh@12 -- # build_accel_config 00:12:28.343 00:26:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:28.343 00:26:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:28.343 00:26:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:28.343 00:26:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:28.343 00:26:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:28.343 00:26:21 -- accel/accel.sh@40 -- # local IFS=, 00:12:28.343 00:26:21 -- accel/accel.sh@41 -- # jq -r . 00:12:28.343 [2024-04-24 00:26:21.937538] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:12:28.343 [2024-04-24 00:26:21.937708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114852 ] 00:12:28.343 [2024-04-24 00:26:22.107539] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.909 [2024-04-24 00:26:22.415016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val= 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val= 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val=0x1 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val= 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val= 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val=fill 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@23 -- # accel_opc=fill 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val=0x80 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val= 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # 
case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val=software 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@22 -- # accel_module=software 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val=64 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val=64 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val=1 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val=Yes 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val= 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:29.168 00:26:22 -- accel/accel.sh@20 -- # val= 00:12:29.168 00:26:22 -- accel/accel.sh@21 -- # case "$var" in 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # IFS=: 00:12:29.168 00:26:22 -- accel/accel.sh@19 -- # read -r var val 00:12:31.069 00:26:24 -- accel/accel.sh@20 -- # val= 00:12:31.069 00:26:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # IFS=: 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # read -r var val 00:12:31.069 00:26:24 -- accel/accel.sh@20 -- # val= 00:12:31.069 00:26:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # IFS=: 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # read -r var val 00:12:31.069 00:26:24 -- accel/accel.sh@20 -- # val= 00:12:31.069 00:26:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # IFS=: 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # read -r var val 00:12:31.069 00:26:24 -- accel/accel.sh@20 -- # val= 00:12:31.069 00:26:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # IFS=: 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # read -r var val 00:12:31.069 00:26:24 -- accel/accel.sh@20 -- # val= 00:12:31.069 00:26:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # IFS=: 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # read -r var val 00:12:31.069 00:26:24 -- accel/accel.sh@20 -- # val= 00:12:31.069 00:26:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # IFS=: 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # read -r var val 00:12:31.069 00:26:24 -- accel/accel.sh@27 -- # 
[[ -n software ]] 00:12:31.069 00:26:24 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:31.069 00:26:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:31.069 00:12:31.069 real 0m2.721s 00:12:31.069 user 0m2.450s 00:12:31.069 sys 0m0.193s 00:12:31.069 00:26:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.069 ************************************ 00:12:31.069 END TEST accel_fill 00:12:31.069 ************************************ 00:12:31.069 00:26:24 -- common/autotest_common.sh@10 -- # set +x 00:12:31.069 00:26:24 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:31.069 00:26:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:31.069 00:26:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.069 00:26:24 -- common/autotest_common.sh@10 -- # set +x 00:12:31.069 ************************************ 00:12:31.069 START TEST accel_copy_crc32c 00:12:31.069 ************************************ 00:12:31.069 00:26:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:12:31.069 00:26:24 -- accel/accel.sh@16 -- # local accel_opc 00:12:31.069 00:26:24 -- accel/accel.sh@17 -- # local accel_module 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # IFS=: 00:12:31.069 00:26:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:31.069 00:26:24 -- accel/accel.sh@19 -- # read -r var val 00:12:31.069 00:26:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:31.069 00:26:24 -- accel/accel.sh@12 -- # build_accel_config 00:12:31.069 00:26:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:31.069 00:26:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:31.069 00:26:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:31.069 00:26:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:31.069 00:26:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:31.069 00:26:24 -- accel/accel.sh@40 -- # local IFS=, 00:12:31.069 00:26:24 -- accel/accel.sh@41 -- # jq -r . 00:12:31.069 [2024-04-24 00:26:24.767633] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
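The accel_fill run that just completed passes -f 128 (the fill byte, shown as val=0x80 in the trace) together with -q 64 and -a 64 for queue depth and tasks per core, all documented in the option listing earlier; the two val=64 entries show those values being applied. accel_copy_crc32c, starting here, exercises the combined copy-plus-CRC-32C operation:

    # copy + CRC-32C in one operation, 4 KiB buffers, verification on
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y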
00:12:31.069 [2024-04-24 00:26:24.768405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114919 ] 00:12:31.327 [2024-04-24 00:26:24.943417] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.586 [2024-04-24 00:26:25.205904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val= 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val= 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val=0x1 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val= 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val= 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val=0 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val= 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val=software 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@22 -- # accel_module=software 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val=32 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val=32 
00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val=1 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val=Yes 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val= 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:31.844 00:26:25 -- accel/accel.sh@20 -- # val= 00:12:31.844 00:26:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # IFS=: 00:12:31.844 00:26:25 -- accel/accel.sh@19 -- # read -r var val 00:12:33.742 00:26:27 -- accel/accel.sh@20 -- # val= 00:12:33.742 00:26:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # IFS=: 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # read -r var val 00:12:33.742 00:26:27 -- accel/accel.sh@20 -- # val= 00:12:33.742 00:26:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # IFS=: 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # read -r var val 00:12:33.742 00:26:27 -- accel/accel.sh@20 -- # val= 00:12:33.742 00:26:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # IFS=: 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # read -r var val 00:12:33.742 00:26:27 -- accel/accel.sh@20 -- # val= 00:12:33.742 00:26:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # IFS=: 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # read -r var val 00:12:33.742 00:26:27 -- accel/accel.sh@20 -- # val= 00:12:33.742 00:26:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # IFS=: 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # read -r var val 00:12:33.742 00:26:27 -- accel/accel.sh@20 -- # val= 00:12:33.742 00:26:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # IFS=: 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # read -r var val 00:12:33.742 00:26:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:33.742 00:26:27 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:33.742 00:26:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:33.742 00:12:33.742 real 0m2.712s 00:12:33.742 user 0m2.442s 00:12:33.742 sys 0m0.183s 00:12:33.742 00:26:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:33.742 00:26:27 -- common/autotest_common.sh@10 -- # set +x 00:12:33.742 ************************************ 00:12:33.742 END TEST accel_copy_crc32c 00:12:33.742 ************************************ 00:12:33.742 00:26:27 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:33.742 00:26:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:12:33.742 00:26:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.742 00:26:27 -- common/autotest_common.sh@10 -- # set +x 00:12:33.742 ************************************ 00:12:33.742 START TEST accel_copy_crc32c_C2 00:12:33.742 ************************************ 00:12:33.742 00:26:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:33.742 00:26:27 -- accel/accel.sh@16 -- # local accel_opc 00:12:33.742 00:26:27 -- accel/accel.sh@17 -- # local accel_module 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # IFS=: 00:12:33.742 00:26:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:33.742 00:26:27 -- accel/accel.sh@19 -- # read -r var val 00:12:33.742 00:26:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:33.742 00:26:27 -- accel/accel.sh@12 -- # build_accel_config 00:12:33.742 00:26:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:33.742 00:26:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:33.742 00:26:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:33.742 00:26:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:33.742 00:26:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:33.742 00:26:27 -- accel/accel.sh@40 -- # local IFS=, 00:12:33.742 00:26:27 -- accel/accel.sh@41 -- # jq -r . 00:12:34.010 [2024-04-24 00:26:27.562285] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:12:34.010 [2024-04-24 00:26:27.562474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114987 ] 00:12:34.010 [2024-04-24 00:26:27.741271] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.268 [2024-04-24 00:26:28.056041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val= 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val= 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val=0x1 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val= 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val= 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val=0 00:12:34.833 00:26:28 -- 
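Each passing run above ends with the same three checks: [[ -n software ]] and the [[ -n <opcode> ]] test appear to confirm that accel_perf reported a module name and an opcode at all, while [[ software == software ]] confirms the operation was handled by the software module, which is what one would expect here given that build_accel_config leaves the JSON config empty (no hardware engine configured). accel_copy_crc32c_C2, starting at this point, repeats the copy+CRC-32C workload with the two-segment I/O vector:

    # copy + CRC-32C with a two-segment I/O vector (-C 2)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2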
accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val= 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val=software 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@22 -- # accel_module=software 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val=32 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val=32 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val=1 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val=Yes 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val= 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:34.833 00:26:28 -- accel/accel.sh@20 -- # val= 00:12:34.833 00:26:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # IFS=: 00:12:34.833 00:26:28 -- accel/accel.sh@19 -- # read -r var val 00:12:36.731 00:26:30 -- accel/accel.sh@20 -- # val= 00:12:36.731 00:26:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # IFS=: 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # read -r var val 00:12:36.731 00:26:30 -- accel/accel.sh@20 -- # val= 00:12:36.731 00:26:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # IFS=: 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # read -r var val 00:12:36.731 00:26:30 -- accel/accel.sh@20 -- # val= 00:12:36.731 00:26:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # IFS=: 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # read -r var val 
00:12:36.731 00:26:30 -- accel/accel.sh@20 -- # val= 00:12:36.731 00:26:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # IFS=: 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # read -r var val 00:12:36.731 00:26:30 -- accel/accel.sh@20 -- # val= 00:12:36.731 00:26:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # IFS=: 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # read -r var val 00:12:36.731 00:26:30 -- accel/accel.sh@20 -- # val= 00:12:36.731 00:26:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # IFS=: 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # read -r var val 00:12:36.731 00:26:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:36.731 00:26:30 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:36.731 00:26:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:36.731 00:12:36.731 real 0m2.820s 00:12:36.731 user 0m2.508s 00:12:36.731 sys 0m0.228s 00:12:36.731 00:26:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:36.731 ************************************ 00:12:36.731 END TEST accel_copy_crc32c_C2 00:12:36.731 ************************************ 00:12:36.731 00:26:30 -- common/autotest_common.sh@10 -- # set +x 00:12:36.731 00:26:30 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:12:36.731 00:26:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:36.731 00:26:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:36.731 00:26:30 -- common/autotest_common.sh@10 -- # set +x 00:12:36.731 ************************************ 00:12:36.731 START TEST accel_dualcast 00:12:36.731 ************************************ 00:12:36.731 00:26:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:12:36.731 00:26:30 -- accel/accel.sh@16 -- # local accel_opc 00:12:36.731 00:26:30 -- accel/accel.sh@17 -- # local accel_module 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # IFS=: 00:12:36.731 00:26:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:36.731 00:26:30 -- accel/accel.sh@19 -- # read -r var val 00:12:36.731 00:26:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:36.731 00:26:30 -- accel/accel.sh@12 -- # build_accel_config 00:12:36.731 00:26:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:36.731 00:26:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:36.731 00:26:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:36.731 00:26:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:36.731 00:26:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:36.731 00:26:30 -- accel/accel.sh@40 -- # local IFS=, 00:12:36.731 00:26:30 -- accel/accel.sh@41 -- # jq -r . 00:12:36.731 [2024-04-24 00:26:30.472626] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:12:36.732 [2024-04-24 00:26:30.472840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115050 ] 00:12:36.989 [2024-04-24 00:26:30.649016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.247 [2024-04-24 00:26:30.911087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val= 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val= 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val=0x1 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val= 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val= 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val=dualcast 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val= 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val=software 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@22 -- # accel_module=software 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val=32 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val=32 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val=1 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val='1 seconds' 
00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val=Yes 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val= 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:37.505 00:26:31 -- accel/accel.sh@20 -- # val= 00:12:37.505 00:26:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # IFS=: 00:12:37.505 00:26:31 -- accel/accel.sh@19 -- # read -r var val 00:12:40.043 00:26:33 -- accel/accel.sh@20 -- # val= 00:12:40.043 00:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # IFS=: 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # read -r var val 00:12:40.043 00:26:33 -- accel/accel.sh@20 -- # val= 00:12:40.043 00:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # IFS=: 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # read -r var val 00:12:40.043 00:26:33 -- accel/accel.sh@20 -- # val= 00:12:40.043 00:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # IFS=: 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # read -r var val 00:12:40.043 00:26:33 -- accel/accel.sh@20 -- # val= 00:12:40.043 00:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # IFS=: 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # read -r var val 00:12:40.043 00:26:33 -- accel/accel.sh@20 -- # val= 00:12:40.043 00:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # IFS=: 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # read -r var val 00:12:40.043 00:26:33 -- accel/accel.sh@20 -- # val= 00:12:40.043 00:26:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # IFS=: 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # read -r var val 00:12:40.043 00:26:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:40.043 00:26:33 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:40.043 00:26:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:40.043 00:12:40.043 real 0m2.881s 00:12:40.043 user 0m2.592s 00:12:40.043 sys 0m0.202s 00:12:40.043 00:26:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:40.043 ************************************ 00:12:40.043 END TEST accel_dualcast 00:12:40.043 ************************************ 00:12:40.043 00:26:33 -- common/autotest_common.sh@10 -- # set +x 00:12:40.043 00:26:33 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:40.043 00:26:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:40.043 00:26:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:40.043 00:26:33 -- common/autotest_common.sh@10 -- # set +x 00:12:40.043 ************************************ 00:12:40.043 START TEST accel_compare 00:12:40.043 ************************************ 00:12:40.043 00:26:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:12:40.043 00:26:33 -- accel/accel.sh@16 -- # local accel_opc 00:12:40.043 00:26:33 -- accel/accel.sh@17 -- # local 
accel_module 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # IFS=: 00:12:40.043 00:26:33 -- accel/accel.sh@19 -- # read -r var val 00:12:40.043 00:26:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:40.043 00:26:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:40.043 00:26:33 -- accel/accel.sh@12 -- # build_accel_config 00:12:40.043 00:26:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:40.043 00:26:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:40.043 00:26:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:40.043 00:26:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:40.043 00:26:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:40.043 00:26:33 -- accel/accel.sh@40 -- # local IFS=, 00:12:40.043 00:26:33 -- accel/accel.sh@41 -- # jq -r . 00:12:40.043 [2024-04-24 00:26:33.440469] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:12:40.043 [2024-04-24 00:26:33.440670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115117 ] 00:12:40.043 [2024-04-24 00:26:33.616648] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.317 [2024-04-24 00:26:33.955858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.576 00:26:34 -- accel/accel.sh@20 -- # val= 00:12:40.576 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.576 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.576 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.576 00:26:34 -- accel/accel.sh@20 -- # val= 00:12:40.576 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.576 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.576 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.576 00:26:34 -- accel/accel.sh@20 -- # val=0x1 00:12:40.576 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.576 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.576 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.576 00:26:34 -- accel/accel.sh@20 -- # val= 00:12:40.576 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.576 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.576 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.576 00:26:34 -- accel/accel.sh@20 -- # val= 00:12:40.576 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.576 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.576 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.576 00:26:34 -- accel/accel.sh@20 -- # val=compare 00:12:40.576 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.577 00:26:34 -- accel/accel.sh@23 -- # accel_opc=compare 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.577 00:26:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:40.577 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.577 00:26:34 -- accel/accel.sh@20 -- # val= 00:12:40.577 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.577 00:26:34 -- accel/accel.sh@20 -- # val=software 00:12:40.577 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 
00:12:40.577 00:26:34 -- accel/accel.sh@22 -- # accel_module=software 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.577 00:26:34 -- accel/accel.sh@20 -- # val=32 00:12:40.577 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.577 00:26:34 -- accel/accel.sh@20 -- # val=32 00:12:40.577 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.577 00:26:34 -- accel/accel.sh@20 -- # val=1 00:12:40.577 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.577 00:26:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:40.577 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.577 00:26:34 -- accel/accel.sh@20 -- # val=Yes 00:12:40.577 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.577 00:26:34 -- accel/accel.sh@20 -- # val= 00:12:40.577 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:40.577 00:26:34 -- accel/accel.sh@20 -- # val= 00:12:40.577 00:26:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # IFS=: 00:12:40.577 00:26:34 -- accel/accel.sh@19 -- # read -r var val 00:12:42.539 00:26:36 -- accel/accel.sh@20 -- # val= 00:12:42.539 00:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.539 00:26:36 -- accel/accel.sh@19 -- # IFS=: 00:12:42.539 00:26:36 -- accel/accel.sh@19 -- # read -r var val 00:12:42.539 00:26:36 -- accel/accel.sh@20 -- # val= 00:12:42.539 00:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.539 00:26:36 -- accel/accel.sh@19 -- # IFS=: 00:12:42.539 00:26:36 -- accel/accel.sh@19 -- # read -r var val 00:12:42.539 00:26:36 -- accel/accel.sh@20 -- # val= 00:12:42.539 00:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.539 00:26:36 -- accel/accel.sh@19 -- # IFS=: 00:12:42.539 00:26:36 -- accel/accel.sh@19 -- # read -r var val 00:12:42.539 00:26:36 -- accel/accel.sh@20 -- # val= 00:12:42.539 00:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.539 00:26:36 -- accel/accel.sh@19 -- # IFS=: 00:12:42.539 00:26:36 -- accel/accel.sh@19 -- # read -r var val 00:12:42.539 00:26:36 -- accel/accel.sh@20 -- # val= 00:12:42.539 00:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.539 00:26:36 -- accel/accel.sh@19 -- # IFS=: 00:12:42.539 00:26:36 -- accel/accel.sh@19 -- # read -r var val 00:12:42.539 00:26:36 -- accel/accel.sh@20 -- # val= 00:12:42.539 00:26:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.539 00:26:36 -- accel/accel.sh@19 -- # IFS=: 00:12:42.539 00:26:36 -- accel/accel.sh@19 -- # read -r var val 00:12:42.539 00:26:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:42.539 00:26:36 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:42.539 00:26:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:42.539 00:12:42.539 real 0m2.921s 00:12:42.539 user 0m2.601s 00:12:42.539 sys 
0m0.265s 00:12:42.539 00:26:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:42.539 00:26:36 -- common/autotest_common.sh@10 -- # set +x 00:12:42.539 ************************************ 00:12:42.539 END TEST accel_compare 00:12:42.539 ************************************ 00:12:42.797 00:26:36 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:42.797 00:26:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:42.797 00:26:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:42.797 00:26:36 -- common/autotest_common.sh@10 -- # set +x 00:12:42.797 ************************************ 00:12:42.797 START TEST accel_xor 00:12:42.797 ************************************ 00:12:42.797 00:26:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:12:42.797 00:26:36 -- accel/accel.sh@16 -- # local accel_opc 00:12:42.797 00:26:36 -- accel/accel.sh@17 -- # local accel_module 00:12:42.797 00:26:36 -- accel/accel.sh@19 -- # IFS=: 00:12:42.797 00:26:36 -- accel/accel.sh@19 -- # read -r var val 00:12:42.797 00:26:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:42.797 00:26:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:42.797 00:26:36 -- accel/accel.sh@12 -- # build_accel_config 00:12:42.797 00:26:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:42.797 00:26:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:42.797 00:26:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:42.797 00:26:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:42.797 00:26:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:42.797 00:26:36 -- accel/accel.sh@40 -- # local IFS=, 00:12:42.797 00:26:36 -- accel/accel.sh@41 -- # jq -r . 00:12:42.797 [2024-04-24 00:26:36.446299] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:12:42.797 [2024-04-24 00:26:36.446461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115179 ] 00:12:43.054 [2024-04-24 00:26:36.607678] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.312 [2024-04-24 00:26:36.848331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val= 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val= 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val=0x1 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val= 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val= 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val=xor 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@23 -- # accel_opc=xor 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val=2 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val= 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val=software 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@22 -- # accel_module=software 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val=32 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val=32 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val=1 00:12:43.571 00:26:37 -- 
accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val=Yes 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val= 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:43.571 00:26:37 -- accel/accel.sh@20 -- # val= 00:12:43.571 00:26:37 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # IFS=: 00:12:43.571 00:26:37 -- accel/accel.sh@19 -- # read -r var val 00:12:45.474 00:26:38 -- accel/accel.sh@20 -- # val= 00:12:45.474 00:26:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.474 00:26:38 -- accel/accel.sh@19 -- # IFS=: 00:12:45.474 00:26:38 -- accel/accel.sh@19 -- # read -r var val 00:12:45.474 00:26:38 -- accel/accel.sh@20 -- # val= 00:12:45.474 00:26:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.474 00:26:38 -- accel/accel.sh@19 -- # IFS=: 00:12:45.474 00:26:38 -- accel/accel.sh@19 -- # read -r var val 00:12:45.474 00:26:38 -- accel/accel.sh@20 -- # val= 00:12:45.474 00:26:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.474 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:45.474 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:45.474 00:26:39 -- accel/accel.sh@20 -- # val= 00:12:45.474 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.474 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:45.474 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:45.474 00:26:39 -- accel/accel.sh@20 -- # val= 00:12:45.474 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.474 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:45.474 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:45.474 00:26:39 -- accel/accel.sh@20 -- # val= 00:12:45.474 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.474 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:45.474 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:45.474 00:26:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:45.474 00:26:39 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:45.474 00:26:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:45.474 00:12:45.474 real 0m2.615s 00:12:45.474 user 0m2.379s 00:12:45.474 sys 0m0.165s 00:12:45.474 00:26:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:45.474 00:26:39 -- common/autotest_common.sh@10 -- # set +x 00:12:45.474 ************************************ 00:12:45.474 END TEST accel_xor 00:12:45.474 ************************************ 00:12:45.474 00:26:39 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:45.474 00:26:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:45.474 00:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.474 00:26:39 -- common/autotest_common.sh@10 -- # set +x 00:12:45.474 ************************************ 00:12:45.474 START TEST accel_xor 00:12:45.474 ************************************ 00:12:45.474 
00:26:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:12:45.474 00:26:39 -- accel/accel.sh@16 -- # local accel_opc 00:12:45.474 00:26:39 -- accel/accel.sh@17 -- # local accel_module 00:12:45.474 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:45.474 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:45.474 00:26:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:45.474 00:26:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:45.474 00:26:39 -- accel/accel.sh@12 -- # build_accel_config 00:12:45.474 00:26:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:45.474 00:26:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:45.474 00:26:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:45.474 00:26:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:45.474 00:26:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:45.474 00:26:39 -- accel/accel.sh@40 -- # local IFS=, 00:12:45.474 00:26:39 -- accel/accel.sh@41 -- # jq -r . 00:12:45.474 [2024-04-24 00:26:39.158962] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:12:45.474 [2024-04-24 00:26:39.159112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115241 ] 00:12:45.731 [2024-04-24 00:26:39.325312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.988 [2024-04-24 00:26:39.581012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.246 00:26:39 -- accel/accel.sh@20 -- # val= 00:12:46.246 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val= 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val=0x1 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val= 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val= 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val=xor 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@23 -- # accel_opc=xor 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val=3 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 
00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val= 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val=software 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@22 -- # accel_module=software 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val=32 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val=32 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val=1 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val=Yes 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val= 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:46.247 00:26:39 -- accel/accel.sh@20 -- # val= 00:12:46.247 00:26:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # IFS=: 00:12:46.247 00:26:39 -- accel/accel.sh@19 -- # read -r var val 00:12:48.147 00:26:41 -- accel/accel.sh@20 -- # val= 00:12:48.147 00:26:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # IFS=: 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # read -r var val 00:12:48.147 00:26:41 -- accel/accel.sh@20 -- # val= 00:12:48.147 00:26:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # IFS=: 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # read -r var val 00:12:48.147 00:26:41 -- accel/accel.sh@20 -- # val= 00:12:48.147 00:26:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # IFS=: 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # read -r var val 00:12:48.147 00:26:41 -- accel/accel.sh@20 -- # val= 00:12:48.147 00:26:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # IFS=: 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # read -r var val 00:12:48.147 00:26:41 -- accel/accel.sh@20 -- # val= 00:12:48.147 00:26:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # IFS=: 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # read -r var val 00:12:48.147 00:26:41 -- accel/accel.sh@20 -- # val= 00:12:48.147 00:26:41 -- accel/accel.sh@21 -- # case "$var" in 
00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # IFS=: 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # read -r var val 00:12:48.147 00:26:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:48.147 00:26:41 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:48.147 00:26:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:48.147 00:12:48.147 real 0m2.669s 00:12:48.147 user 0m2.406s 00:12:48.147 sys 0m0.192s 00:12:48.147 00:26:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:48.147 00:26:41 -- common/autotest_common.sh@10 -- # set +x 00:12:48.147 ************************************ 00:12:48.147 END TEST accel_xor 00:12:48.147 ************************************ 00:12:48.147 00:26:41 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:48.147 00:26:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:48.147 00:26:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:48.147 00:26:41 -- common/autotest_common.sh@10 -- # set +x 00:12:48.147 ************************************ 00:12:48.147 START TEST accel_dif_verify 00:12:48.147 ************************************ 00:12:48.147 00:26:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:12:48.147 00:26:41 -- accel/accel.sh@16 -- # local accel_opc 00:12:48.147 00:26:41 -- accel/accel.sh@17 -- # local accel_module 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # IFS=: 00:12:48.147 00:26:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:48.147 00:26:41 -- accel/accel.sh@19 -- # read -r var val 00:12:48.147 00:26:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:48.147 00:26:41 -- accel/accel.sh@12 -- # build_accel_config 00:12:48.147 00:26:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:48.147 00:26:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:48.147 00:26:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:48.147 00:26:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:48.147 00:26:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:48.147 00:26:41 -- accel/accel.sh@40 -- # local IFS=, 00:12:48.147 00:26:41 -- accel/accel.sh@41 -- # jq -r . 00:12:48.147 [2024-04-24 00:26:41.924664] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:12:48.147 [2024-04-24 00:26:41.924945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115301 ] 00:12:48.405 [2024-04-24 00:26:42.108636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.662 [2024-04-24 00:26:42.354403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val= 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val= 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val=0x1 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val= 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val= 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val=dif_verify 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val='512 bytes' 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val='8 bytes' 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val= 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val=software 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@22 -- # accel_module=software 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- 
accel/accel.sh@20 -- # val=32 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val=32 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val=1 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val=No 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val= 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:48.930 00:26:42 -- accel/accel.sh@20 -- # val= 00:12:48.930 00:26:42 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # IFS=: 00:12:48.930 00:26:42 -- accel/accel.sh@19 -- # read -r var val 00:12:50.857 00:26:44 -- accel/accel.sh@20 -- # val= 00:12:50.857 00:26:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # IFS=: 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # read -r var val 00:12:50.857 00:26:44 -- accel/accel.sh@20 -- # val= 00:12:50.857 00:26:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # IFS=: 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # read -r var val 00:12:50.857 00:26:44 -- accel/accel.sh@20 -- # val= 00:12:50.857 00:26:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # IFS=: 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # read -r var val 00:12:50.857 00:26:44 -- accel/accel.sh@20 -- # val= 00:12:50.857 00:26:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # IFS=: 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # read -r var val 00:12:50.857 00:26:44 -- accel/accel.sh@20 -- # val= 00:12:50.857 00:26:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # IFS=: 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # read -r var val 00:12:50.857 00:26:44 -- accel/accel.sh@20 -- # val= 00:12:50.857 00:26:44 -- accel/accel.sh@21 -- # case "$var" in 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # IFS=: 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # read -r var val 00:12:50.857 00:26:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:50.857 00:26:44 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:50.857 ************************************ 00:12:50.857 END TEST accel_dif_verify 00:12:50.857 ************************************ 00:12:50.857 00:26:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:50.857 00:12:50.857 real 0m2.633s 00:12:50.857 user 0m2.397s 00:12:50.857 sys 0m0.176s 00:12:50.857 00:26:44 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:12:50.857 00:26:44 -- common/autotest_common.sh@10 -- # set +x 00:12:50.857 00:26:44 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:50.857 00:26:44 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:50.857 00:26:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:50.857 00:26:44 -- common/autotest_common.sh@10 -- # set +x 00:12:50.857 ************************************ 00:12:50.857 START TEST accel_dif_generate 00:12:50.857 ************************************ 00:12:50.857 00:26:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:12:50.857 00:26:44 -- accel/accel.sh@16 -- # local accel_opc 00:12:50.857 00:26:44 -- accel/accel.sh@17 -- # local accel_module 00:12:50.857 00:26:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # IFS=: 00:12:50.857 00:26:44 -- accel/accel.sh@19 -- # read -r var val 00:12:50.857 00:26:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:50.857 00:26:44 -- accel/accel.sh@12 -- # build_accel_config 00:12:50.857 00:26:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:50.857 00:26:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:50.857 00:26:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:50.857 00:26:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:50.857 00:26:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:50.857 00:26:44 -- accel/accel.sh@40 -- # local IFS=, 00:12:50.857 00:26:44 -- accel/accel.sh@41 -- # jq -r . 00:12:51.116 [2024-04-24 00:26:44.658140] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:12:51.116 [2024-04-24 00:26:44.658334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115368 ] 00:12:51.116 [2024-04-24 00:26:44.839458] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.374 [2024-04-24 00:26:45.110685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.632 00:26:45 -- accel/accel.sh@20 -- # val= 00:12:51.632 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.632 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.632 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.632 00:26:45 -- accel/accel.sh@20 -- # val= 00:12:51.632 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.632 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.632 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.632 00:26:45 -- accel/accel.sh@20 -- # val=0x1 00:12:51.632 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.632 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.632 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val= 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val= 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val=dif_generate 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val='512 bytes' 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val='8 bytes' 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val= 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val=software 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@22 -- # accel_module=software 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val=32 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val=32 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val=1 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val=No 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val= 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:51.633 00:26:45 -- accel/accel.sh@20 -- # val= 00:12:51.633 00:26:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # IFS=: 00:12:51.633 00:26:45 -- accel/accel.sh@19 -- # read -r var val 00:12:53.532 00:26:47 -- accel/accel.sh@20 -- # val= 00:12:53.532 00:26:47 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.532 00:26:47 -- accel/accel.sh@19 -- # IFS=: 00:12:53.532 00:26:47 -- 
accel/accel.sh@19 -- # read -r var val 00:12:53.532 00:26:47 -- accel/accel.sh@20 -- # val= 00:12:53.532 00:26:47 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.532 00:26:47 -- accel/accel.sh@19 -- # IFS=: 00:12:53.532 00:26:47 -- accel/accel.sh@19 -- # read -r var val 00:12:53.532 00:26:47 -- accel/accel.sh@20 -- # val= 00:12:53.532 00:26:47 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.532 00:26:47 -- accel/accel.sh@19 -- # IFS=: 00:12:53.532 00:26:47 -- accel/accel.sh@19 -- # read -r var val 00:12:53.532 00:26:47 -- accel/accel.sh@20 -- # val= 00:12:53.532 00:26:47 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.532 00:26:47 -- accel/accel.sh@19 -- # IFS=: 00:12:53.532 00:26:47 -- accel/accel.sh@19 -- # read -r var val 00:12:53.532 00:26:47 -- accel/accel.sh@20 -- # val= 00:12:53.532 00:26:47 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.532 00:26:47 -- accel/accel.sh@19 -- # IFS=: 00:12:53.532 00:26:47 -- accel/accel.sh@19 -- # read -r var val 00:12:53.532 00:26:47 -- accel/accel.sh@20 -- # val= 00:12:53.532 00:26:47 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.532 00:26:47 -- accel/accel.sh@19 -- # IFS=: 00:12:53.532 00:26:47 -- accel/accel.sh@19 -- # read -r var val 00:12:53.532 00:26:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:53.532 00:26:47 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:53.532 00:26:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:53.532 00:12:53.532 real 0m2.641s 00:12:53.532 user 0m2.335s 00:12:53.532 sys 0m0.237s 00:12:53.532 00:26:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:53.532 00:26:47 -- common/autotest_common.sh@10 -- # set +x 00:12:53.532 ************************************ 00:12:53.532 END TEST accel_dif_generate 00:12:53.532 ************************************ 00:12:53.532 00:26:47 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:53.532 00:26:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:53.532 00:26:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:53.532 00:26:47 -- common/autotest_common.sh@10 -- # set +x 00:12:53.818 ************************************ 00:12:53.818 START TEST accel_dif_generate_copy 00:12:53.818 ************************************ 00:12:53.818 00:26:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:12:53.818 00:26:47 -- accel/accel.sh@16 -- # local accel_opc 00:12:53.818 00:26:47 -- accel/accel.sh@17 -- # local accel_module 00:12:53.818 00:26:47 -- accel/accel.sh@19 -- # IFS=: 00:12:53.818 00:26:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:53.818 00:26:47 -- accel/accel.sh@19 -- # read -r var val 00:12:53.818 00:26:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:53.818 00:26:47 -- accel/accel.sh@12 -- # build_accel_config 00:12:53.818 00:26:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:53.818 00:26:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:53.818 00:26:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:53.818 00:26:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:53.818 00:26:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:53.818 00:26:47 -- accel/accel.sh@40 -- # local IFS=, 00:12:53.818 00:26:47 -- accel/accel.sh@41 -- # jq -r . 00:12:53.818 [2024-04-24 00:26:47.387211] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:12:53.818 [2024-04-24 00:26:47.387347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115423 ] 00:12:53.818 [2024-04-24 00:26:47.547352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.076 [2024-04-24 00:26:47.795486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val= 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val= 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val=0x1 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val= 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val= 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val= 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val=software 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@22 -- # accel_module=software 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val=32 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val=32 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 
-- # val=1 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val=No 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val= 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:54.334 00:26:48 -- accel/accel.sh@20 -- # val= 00:12:54.334 00:26:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # IFS=: 00:12:54.334 00:26:48 -- accel/accel.sh@19 -- # read -r var val 00:12:56.326 00:26:49 -- accel/accel.sh@20 -- # val= 00:12:56.326 00:26:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.326 00:26:49 -- accel/accel.sh@19 -- # IFS=: 00:12:56.326 00:26:49 -- accel/accel.sh@19 -- # read -r var val 00:12:56.326 00:26:49 -- accel/accel.sh@20 -- # val= 00:12:56.326 00:26:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.326 00:26:49 -- accel/accel.sh@19 -- # IFS=: 00:12:56.326 00:26:49 -- accel/accel.sh@19 -- # read -r var val 00:12:56.326 00:26:49 -- accel/accel.sh@20 -- # val= 00:12:56.326 00:26:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.326 00:26:49 -- accel/accel.sh@19 -- # IFS=: 00:12:56.326 00:26:49 -- accel/accel.sh@19 -- # read -r var val 00:12:56.326 00:26:49 -- accel/accel.sh@20 -- # val= 00:12:56.326 00:26:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.326 00:26:49 -- accel/accel.sh@19 -- # IFS=: 00:12:56.326 00:26:49 -- accel/accel.sh@19 -- # read -r var val 00:12:56.326 00:26:49 -- accel/accel.sh@20 -- # val= 00:12:56.326 00:26:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.326 00:26:49 -- accel/accel.sh@19 -- # IFS=: 00:12:56.326 00:26:49 -- accel/accel.sh@19 -- # read -r var val 00:12:56.326 00:26:49 -- accel/accel.sh@20 -- # val= 00:12:56.327 00:26:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.327 00:26:49 -- accel/accel.sh@19 -- # IFS=: 00:12:56.327 00:26:49 -- accel/accel.sh@19 -- # read -r var val 00:12:56.327 00:26:49 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:56.327 00:26:49 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:56.327 00:26:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:56.327 00:12:56.327 real 0m2.616s 00:12:56.327 user 0m2.336s 00:12:56.327 sys 0m0.215s 00:12:56.327 00:26:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:56.327 ************************************ 00:12:56.327 00:26:49 -- common/autotest_common.sh@10 -- # set +x 00:12:56.327 END TEST accel_dif_generate_copy 00:12:56.327 ************************************ 00:12:56.327 00:26:50 -- accel/accel.sh@115 -- # [[ y == y ]] 00:12:56.327 00:26:50 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:56.327 00:26:50 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:12:56.327 00:26:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:56.327 00:26:50 -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.327 ************************************ 00:12:56.327 START TEST accel_comp 00:12:56.327 ************************************ 00:12:56.327 00:26:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:56.327 00:26:50 -- accel/accel.sh@16 -- # local accel_opc 00:12:56.327 00:26:50 -- accel/accel.sh@17 -- # local accel_module 00:12:56.327 00:26:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:56.327 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:56.327 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:56.327 00:26:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:56.327 00:26:50 -- accel/accel.sh@12 -- # build_accel_config 00:12:56.327 00:26:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:56.327 00:26:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:56.327 00:26:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:56.327 00:26:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:56.327 00:26:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:56.327 00:26:50 -- accel/accel.sh@40 -- # local IFS=, 00:12:56.327 00:26:50 -- accel/accel.sh@41 -- # jq -r . 00:12:56.327 [2024-04-24 00:26:50.108409] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:12:56.327 [2024-04-24 00:26:50.109128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115494 ] 00:12:56.585 [2024-04-24 00:26:50.282337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.843 [2024-04-24 00:26:50.603143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val= 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val= 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val= 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val=0x1 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val= 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val= 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val=compress 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@23 
-- # accel_opc=compress 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val= 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val=software 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@22 -- # accel_module=software 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val=32 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val=32 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val=1 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val=No 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val= 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:57.411 00:26:50 -- accel/accel.sh@20 -- # val= 00:12:57.411 00:26:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # IFS=: 00:12:57.411 00:26:50 -- accel/accel.sh@19 -- # read -r var val 00:12:59.312 00:26:52 -- accel/accel.sh@20 -- # val= 00:12:59.312 00:26:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # IFS=: 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # read -r var val 00:12:59.312 00:26:52 -- accel/accel.sh@20 -- # val= 00:12:59.312 00:26:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # IFS=: 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # read -r var val 00:12:59.312 00:26:52 -- accel/accel.sh@20 -- # val= 00:12:59.312 00:26:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # IFS=: 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # 
read -r var val 00:12:59.312 00:26:52 -- accel/accel.sh@20 -- # val= 00:12:59.312 00:26:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # IFS=: 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # read -r var val 00:12:59.312 00:26:52 -- accel/accel.sh@20 -- # val= 00:12:59.312 00:26:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # IFS=: 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # read -r var val 00:12:59.312 00:26:52 -- accel/accel.sh@20 -- # val= 00:12:59.312 00:26:52 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # IFS=: 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # read -r var val 00:12:59.312 00:26:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:59.312 00:26:52 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:12:59.312 00:26:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:59.312 00:12:59.312 real 0m2.775s 00:12:59.312 user 0m2.539s 00:12:59.312 sys 0m0.192s 00:12:59.312 00:26:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:59.312 00:26:52 -- common/autotest_common.sh@10 -- # set +x 00:12:59.312 ************************************ 00:12:59.312 END TEST accel_comp 00:12:59.312 ************************************ 00:12:59.312 00:26:52 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:59.312 00:26:52 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:59.312 00:26:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.312 00:26:52 -- common/autotest_common.sh@10 -- # set +x 00:12:59.312 ************************************ 00:12:59.312 START TEST accel_decomp 00:12:59.312 ************************************ 00:12:59.312 00:26:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:59.312 00:26:52 -- accel/accel.sh@16 -- # local accel_opc 00:12:59.312 00:26:52 -- accel/accel.sh@17 -- # local accel_module 00:12:59.312 00:26:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # IFS=: 00:12:59.312 00:26:52 -- accel/accel.sh@19 -- # read -r var val 00:12:59.312 00:26:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:59.312 00:26:52 -- accel/accel.sh@12 -- # build_accel_config 00:12:59.312 00:26:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:59.312 00:26:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:59.312 00:26:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:59.312 00:26:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:59.312 00:26:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:59.312 00:26:52 -- accel/accel.sh@40 -- # local IFS=, 00:12:59.312 00:26:52 -- accel/accel.sh@41 -- # jq -r . 00:12:59.312 [2024-04-24 00:26:52.979943] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:12:59.312 [2024-04-24 00:26:52.980541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115554 ] 00:12:59.570 [2024-04-24 00:26:53.152310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.828 [2024-04-24 00:26:53.462481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.123 00:26:53 -- accel/accel.sh@20 -- # val= 00:13:00.123 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.123 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.123 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.123 00:26:53 -- accel/accel.sh@20 -- # val= 00:13:00.123 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.123 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.123 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.123 00:26:53 -- accel/accel.sh@20 -- # val= 00:13:00.123 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.123 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.123 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.123 00:26:53 -- accel/accel.sh@20 -- # val=0x1 00:13:00.123 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.123 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.123 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.123 00:26:53 -- accel/accel.sh@20 -- # val= 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- accel/accel.sh@20 -- # val= 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- accel/accel.sh@20 -- # val=decompress 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- accel/accel.sh@20 -- # val= 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- accel/accel.sh@20 -- # val=software 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@22 -- # accel_module=software 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- accel/accel.sh@20 -- # val=32 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- 
accel/accel.sh@20 -- # val=32 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- accel/accel.sh@20 -- # val=1 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- accel/accel.sh@20 -- # val=Yes 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- accel/accel.sh@20 -- # val= 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:00.124 00:26:53 -- accel/accel.sh@20 -- # val= 00:13:00.124 00:26:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # IFS=: 00:13:00.124 00:26:53 -- accel/accel.sh@19 -- # read -r var val 00:13:02.027 00:26:55 -- accel/accel.sh@20 -- # val= 00:13:02.027 00:26:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # IFS=: 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # read -r var val 00:13:02.027 00:26:55 -- accel/accel.sh@20 -- # val= 00:13:02.027 00:26:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # IFS=: 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # read -r var val 00:13:02.027 00:26:55 -- accel/accel.sh@20 -- # val= 00:13:02.027 00:26:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # IFS=: 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # read -r var val 00:13:02.027 00:26:55 -- accel/accel.sh@20 -- # val= 00:13:02.027 00:26:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # IFS=: 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # read -r var val 00:13:02.027 00:26:55 -- accel/accel.sh@20 -- # val= 00:13:02.027 00:26:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # IFS=: 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # read -r var val 00:13:02.027 00:26:55 -- accel/accel.sh@20 -- # val= 00:13:02.027 00:26:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # IFS=: 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # read -r var val 00:13:02.027 00:26:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:02.027 00:26:55 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:02.027 00:26:55 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:02.027 00:13:02.027 real 0m2.721s 00:13:02.027 user 0m2.432s 00:13:02.027 sys 0m0.216s 00:13:02.027 00:26:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:02.027 00:26:55 -- common/autotest_common.sh@10 -- # set +x 00:13:02.027 ************************************ 00:13:02.027 END TEST accel_decomp 00:13:02.027 ************************************ 00:13:02.027 00:26:55 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
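Every accel_test case in this log expands to the same accel_perf example binary; the accel/accel.sh@12 lines show the exact command, with build_accel_config streaming the JSON module configuration to the tool on /dev/fd/62 (the jq -r . step). A minimal sketch of replaying the accel_decmop_full case just announced, assuming the same build tree at /home/vagrant/spdk_repo/spdk and assuming that dropping -c simply leaves the default software module in place, which is the module this run reports using anyway:
# Hedged reproduction sketch; the -t/-w/-l/-y/-o flags are copied from the run_test line above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -o 0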
00:13:02.027 00:26:55 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:02.027 00:26:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.027 00:26:55 -- common/autotest_common.sh@10 -- # set +x 00:13:02.027 ************************************ 00:13:02.027 START TEST accel_decmop_full 00:13:02.027 ************************************ 00:13:02.027 00:26:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:02.027 00:26:55 -- accel/accel.sh@16 -- # local accel_opc 00:13:02.027 00:26:55 -- accel/accel.sh@17 -- # local accel_module 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # IFS=: 00:13:02.027 00:26:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:02.027 00:26:55 -- accel/accel.sh@19 -- # read -r var val 00:13:02.027 00:26:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:02.027 00:26:55 -- accel/accel.sh@12 -- # build_accel_config 00:13:02.027 00:26:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:02.027 00:26:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:02.027 00:26:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:02.027 00:26:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:02.027 00:26:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:02.027 00:26:55 -- accel/accel.sh@40 -- # local IFS=, 00:13:02.027 00:26:55 -- accel/accel.sh@41 -- # jq -r . 00:13:02.027 [2024-04-24 00:26:55.786085] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:13:02.027 [2024-04-24 00:26:55.786244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115621 ] 00:13:02.285 [2024-04-24 00:26:55.951576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.543 [2024-04-24 00:26:56.197883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.800 00:26:56 -- accel/accel.sh@20 -- # val= 00:13:02.800 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.800 00:26:56 -- accel/accel.sh@20 -- # val= 00:13:02.800 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.800 00:26:56 -- accel/accel.sh@20 -- # val= 00:13:02.800 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.800 00:26:56 -- accel/accel.sh@20 -- # val=0x1 00:13:02.800 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.800 00:26:56 -- accel/accel.sh@20 -- # val= 00:13:02.800 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.800 00:26:56 -- accel/accel.sh@20 -- # val= 00:13:02.800 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.800 
00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.800 00:26:56 -- accel/accel.sh@20 -- # val=decompress 00:13:02.800 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.800 00:26:56 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.800 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.800 00:26:56 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:02.801 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.801 00:26:56 -- accel/accel.sh@20 -- # val= 00:13:02.801 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.801 00:26:56 -- accel/accel.sh@20 -- # val=software 00:13:02.801 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.801 00:26:56 -- accel/accel.sh@22 -- # accel_module=software 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.801 00:26:56 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:02.801 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.801 00:26:56 -- accel/accel.sh@20 -- # val=32 00:13:02.801 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.801 00:26:56 -- accel/accel.sh@20 -- # val=32 00:13:02.801 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.801 00:26:56 -- accel/accel.sh@20 -- # val=1 00:13:02.801 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.801 00:26:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:02.801 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.801 00:26:56 -- accel/accel.sh@20 -- # val=Yes 00:13:02.801 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.801 00:26:56 -- accel/accel.sh@20 -- # val= 00:13:02.801 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:02.801 00:26:56 -- accel/accel.sh@20 -- # val= 00:13:02.801 00:26:56 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # IFS=: 00:13:02.801 00:26:56 -- accel/accel.sh@19 -- # read -r var val 00:13:04.702 00:26:58 -- accel/accel.sh@20 -- # val= 00:13:04.702 00:26:58 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # IFS=: 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # read -r var val 00:13:04.702 00:26:58 -- accel/accel.sh@20 -- # val= 00:13:04.702 00:26:58 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # IFS=: 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # read -r 
var val 00:13:04.702 00:26:58 -- accel/accel.sh@20 -- # val= 00:13:04.702 00:26:58 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # IFS=: 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # read -r var val 00:13:04.702 00:26:58 -- accel/accel.sh@20 -- # val= 00:13:04.702 00:26:58 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # IFS=: 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # read -r var val 00:13:04.702 00:26:58 -- accel/accel.sh@20 -- # val= 00:13:04.702 00:26:58 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # IFS=: 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # read -r var val 00:13:04.702 00:26:58 -- accel/accel.sh@20 -- # val= 00:13:04.702 00:26:58 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # IFS=: 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # read -r var val 00:13:04.702 00:26:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:04.702 00:26:58 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:04.702 00:26:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:04.702 00:13:04.702 real 0m2.596s 00:13:04.702 user 0m2.317s 00:13:04.702 sys 0m0.223s 00:13:04.702 00:26:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:04.702 00:26:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.702 ************************************ 00:13:04.702 END TEST accel_decmop_full 00:13:04.702 ************************************ 00:13:04.702 00:26:58 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:04.702 00:26:58 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:04.702 00:26:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.702 00:26:58 -- common/autotest_common.sh@10 -- # set +x 00:13:04.702 ************************************ 00:13:04.702 START TEST accel_decomp_mcore 00:13:04.702 ************************************ 00:13:04.702 00:26:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:04.702 00:26:58 -- accel/accel.sh@16 -- # local accel_opc 00:13:04.702 00:26:58 -- accel/accel.sh@17 -- # local accel_module 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # IFS=: 00:13:04.702 00:26:58 -- accel/accel.sh@19 -- # read -r var val 00:13:04.702 00:26:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:04.702 00:26:58 -- accel/accel.sh@12 -- # build_accel_config 00:13:04.702 00:26:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:04.702 00:26:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:04.702 00:26:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:04.702 00:26:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:04.702 00:26:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:04.702 00:26:58 -- accel/accel.sh@40 -- # local IFS=, 00:13:04.702 00:26:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:04.702 00:26:58 -- accel/accel.sh@41 -- # jq -r . 00:13:04.959 [2024-04-24 00:26:58.498429] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:13:04.959 [2024-04-24 00:26:58.498577] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115677 ] 00:13:04.959 [2024-04-24 00:26:58.692442] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.523 [2024-04-24 00:26:59.030594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.523 [2024-04-24 00:26:59.030783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.523 [2024-04-24 00:26:59.030876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.523 [2024-04-24 00:26:59.030881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.781 00:26:59 -- accel/accel.sh@20 -- # val= 00:13:05.781 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.781 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.781 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.781 00:26:59 -- accel/accel.sh@20 -- # val= 00:13:05.781 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.781 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.781 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.781 00:26:59 -- accel/accel.sh@20 -- # val= 00:13:05.781 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.781 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.781 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.781 00:26:59 -- accel/accel.sh@20 -- # val=0xf 00:13:05.781 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.781 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.781 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.781 00:26:59 -- accel/accel.sh@20 -- # val= 00:13:05.781 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.781 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.781 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.781 00:26:59 -- accel/accel.sh@20 -- # val= 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.782 00:26:59 -- accel/accel.sh@20 -- # val=decompress 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.782 00:26:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.782 00:26:59 -- accel/accel.sh@20 -- # val= 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.782 00:26:59 -- accel/accel.sh@20 -- # val=software 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@22 -- # accel_module=software 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.782 00:26:59 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 
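For the mcore variant the only change relative to the single-core decompress cases is the core mask: accel_test forwards -m 0xf, the EAL parameter line above carries -c 0xf, and four reactors come up on cores 0 through 3 instead of the lone core 0 seen elsewhere in this run. A short sketch of the same invocation with the mask spelled out, assuming the flags behave as the trace shows:
# Core mask 0xf = cores 0-3; omit -m to get the default single-core layout used by the other cases.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -m 0xf
Consistent with four reactors each doing a one-second workload, the summary further down reports roughly four cores' worth of user time for this case (user 0m8.117s) against a wall-clock runtime of about 2.9 seconds.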
00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.782 00:26:59 -- accel/accel.sh@20 -- # val=32 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.782 00:26:59 -- accel/accel.sh@20 -- # val=32 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.782 00:26:59 -- accel/accel.sh@20 -- # val=1 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.782 00:26:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.782 00:26:59 -- accel/accel.sh@20 -- # val=Yes 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.782 00:26:59 -- accel/accel.sh@20 -- # val= 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:05.782 00:26:59 -- accel/accel.sh@20 -- # val= 00:13:05.782 00:26:59 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # IFS=: 00:13:05.782 00:26:59 -- accel/accel.sh@19 -- # read -r var val 00:13:07.681 00:27:01 -- accel/accel.sh@20 -- # val= 00:13:07.681 00:27:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # IFS=: 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # read -r var val 00:13:07.681 00:27:01 -- accel/accel.sh@20 -- # val= 00:13:07.681 00:27:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # IFS=: 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # read -r var val 00:13:07.681 00:27:01 -- accel/accel.sh@20 -- # val= 00:13:07.681 00:27:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # IFS=: 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # read -r var val 00:13:07.681 00:27:01 -- accel/accel.sh@20 -- # val= 00:13:07.681 00:27:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # IFS=: 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # read -r var val 00:13:07.681 00:27:01 -- accel/accel.sh@20 -- # val= 00:13:07.681 00:27:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # IFS=: 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # read -r var val 00:13:07.681 00:27:01 -- accel/accel.sh@20 -- # val= 00:13:07.681 00:27:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # IFS=: 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # read -r var val 00:13:07.681 00:27:01 -- accel/accel.sh@20 -- # val= 00:13:07.681 00:27:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # IFS=: 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # read -r var val 00:13:07.681 00:27:01 -- accel/accel.sh@20 -- # val= 00:13:07.681 00:27:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # IFS=: 00:13:07.681 00:27:01 -- 
accel/accel.sh@19 -- # read -r var val 00:13:07.681 00:27:01 -- accel/accel.sh@20 -- # val= 00:13:07.681 00:27:01 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # IFS=: 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # read -r var val 00:13:07.681 00:27:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:07.681 00:27:01 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:07.681 00:27:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:07.681 00:13:07.681 real 0m2.886s 00:13:07.681 user 0m8.117s 00:13:07.681 sys 0m0.239s 00:13:07.681 ************************************ 00:13:07.681 00:27:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:07.681 00:27:01 -- common/autotest_common.sh@10 -- # set +x 00:13:07.681 END TEST accel_decomp_mcore 00:13:07.681 ************************************ 00:13:07.681 00:27:01 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:07.681 00:27:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:07.681 00:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:07.681 00:27:01 -- common/autotest_common.sh@10 -- # set +x 00:13:07.681 ************************************ 00:13:07.681 START TEST accel_decomp_full_mcore 00:13:07.681 ************************************ 00:13:07.681 00:27:01 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:07.681 00:27:01 -- accel/accel.sh@16 -- # local accel_opc 00:13:07.681 00:27:01 -- accel/accel.sh@17 -- # local accel_module 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # IFS=: 00:13:07.681 00:27:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:07.681 00:27:01 -- accel/accel.sh@19 -- # read -r var val 00:13:07.681 00:27:01 -- accel/accel.sh@12 -- # build_accel_config 00:13:07.681 00:27:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:07.681 00:27:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:07.681 00:27:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:07.681 00:27:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:07.681 00:27:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:07.681 00:27:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:07.681 00:27:01 -- accel/accel.sh@40 -- # local IFS=, 00:13:07.681 00:27:01 -- accel/accel.sh@41 -- # jq -r . 00:13:07.681 [2024-04-24 00:27:01.466471] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:13:07.682 [2024-04-24 00:27:01.466637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115750 ] 00:13:07.939 [2024-04-24 00:27:01.651350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.197 [2024-04-24 00:27:01.918675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.197 [2024-04-24 00:27:01.918908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.197 [2024-04-24 00:27:01.918799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.197 [2024-04-24 00:27:01.918912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val= 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val= 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val= 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val=0xf 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val= 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val= 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val=decompress 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val= 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val=software 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@22 -- # accel_module=software 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 
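The *_full variants differ from their 4096-byte counterparts only in transfer size: wherever -o 0 is passed, the value loop reads '111250 bytes' instead of '4096 bytes', which suggests that -o 0 makes accel.sh use the whole bib test file as a single buffer. A quick, hedged way to check that reading on the test VM, assuming plain GNU coreutils:
# If the interpretation above is right, this prints 111250, matching the '111250 bytes' values in the trace.
stat -c %s /home/vagrant/spdk_repo/spdk/test/accel/bib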
00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val=32 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val=32 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val=1 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val=Yes 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.456 00:27:02 -- accel/accel.sh@20 -- # val= 00:13:08.456 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.456 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:08.457 00:27:02 -- accel/accel.sh@20 -- # val= 00:13:08.457 00:27:02 -- accel/accel.sh@21 -- # case "$var" in 00:13:08.457 00:27:02 -- accel/accel.sh@19 -- # IFS=: 00:13:08.457 00:27:02 -- accel/accel.sh@19 -- # read -r var val 00:13:11.022 00:27:04 -- accel/accel.sh@20 -- # val= 00:13:11.022 00:27:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.022 00:27:04 -- accel/accel.sh@19 -- # IFS=: 00:13:11.022 00:27:04 -- accel/accel.sh@19 -- # read -r var val 00:13:11.022 00:27:04 -- accel/accel.sh@20 -- # val= 00:13:11.022 00:27:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.022 00:27:04 -- accel/accel.sh@19 -- # IFS=: 00:13:11.022 00:27:04 -- accel/accel.sh@19 -- # read -r var val 00:13:11.022 00:27:04 -- accel/accel.sh@20 -- # val= 00:13:11.022 00:27:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.022 00:27:04 -- accel/accel.sh@19 -- # IFS=: 00:13:11.022 00:27:04 -- accel/accel.sh@19 -- # read -r var val 00:13:11.022 00:27:04 -- accel/accel.sh@20 -- # val= 00:13:11.022 00:27:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.022 00:27:04 -- accel/accel.sh@19 -- # IFS=: 00:13:11.022 00:27:04 -- accel/accel.sh@19 -- # read -r var val 00:13:11.022 00:27:04 -- accel/accel.sh@20 -- # val= 00:13:11.022 00:27:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.022 00:27:04 -- accel/accel.sh@19 -- # IFS=: 00:13:11.022 00:27:04 -- accel/accel.sh@19 -- # read -r var val 00:13:11.022 00:27:04 -- accel/accel.sh@20 -- # val= 00:13:11.022 00:27:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.022 00:27:04 -- accel/accel.sh@19 -- # IFS=: 00:13:11.022 00:27:04 -- accel/accel.sh@19 -- # read -r var val 00:13:11.023 00:27:04 -- accel/accel.sh@20 -- # val= 00:13:11.023 00:27:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.023 00:27:04 -- accel/accel.sh@19 -- # IFS=: 00:13:11.023 00:27:04 -- accel/accel.sh@19 -- # read -r var val 00:13:11.023 00:27:04 -- accel/accel.sh@20 -- # val= 00:13:11.023 00:27:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.023 00:27:04 -- accel/accel.sh@19 -- # IFS=: 00:13:11.023 00:27:04 -- 
accel/accel.sh@19 -- # read -r var val 00:13:11.023 00:27:04 -- accel/accel.sh@20 -- # val= 00:13:11.023 00:27:04 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.023 00:27:04 -- accel/accel.sh@19 -- # IFS=: 00:13:11.023 00:27:04 -- accel/accel.sh@19 -- # read -r var val 00:13:11.023 00:27:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:11.023 00:27:04 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:11.023 00:27:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:11.023 00:13:11.023 real 0m2.859s 00:13:11.023 user 0m8.386s 00:13:11.023 sys 0m0.195s 00:13:11.023 ************************************ 00:13:11.023 END TEST accel_decomp_full_mcore 00:13:11.023 ************************************ 00:13:11.023 00:27:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:11.023 00:27:04 -- common/autotest_common.sh@10 -- # set +x 00:13:11.023 00:27:04 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:11.023 00:27:04 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:11.023 00:27:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:11.023 00:27:04 -- common/autotest_common.sh@10 -- # set +x 00:13:11.023 ************************************ 00:13:11.023 START TEST accel_decomp_mthread 00:13:11.023 ************************************ 00:13:11.023 00:27:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:11.023 00:27:04 -- accel/accel.sh@16 -- # local accel_opc 00:13:11.023 00:27:04 -- accel/accel.sh@17 -- # local accel_module 00:13:11.023 00:27:04 -- accel/accel.sh@19 -- # IFS=: 00:13:11.023 00:27:04 -- accel/accel.sh@19 -- # read -r var val 00:13:11.023 00:27:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:11.023 00:27:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:11.023 00:27:04 -- accel/accel.sh@12 -- # build_accel_config 00:13:11.023 00:27:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:11.023 00:27:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:11.023 00:27:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:11.023 00:27:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:11.023 00:27:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:11.023 00:27:04 -- accel/accel.sh@40 -- # local IFS=, 00:13:11.023 00:27:04 -- accel/accel.sh@41 -- # jq -r . 00:13:11.023 [2024-04-24 00:27:04.405070] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:13:11.023 [2024-04-24 00:27:04.405285] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115815 ] 00:13:11.023 [2024-04-24 00:27:04.584715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.281 [2024-04-24 00:27:04.911006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.539 00:27:05 -- accel/accel.sh@20 -- # val= 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val= 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val= 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val=0x1 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val= 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val= 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val=decompress 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val= 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val=software 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@22 -- # accel_module=software 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val=32 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- 
accel/accel.sh@20 -- # val=32 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val=2 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val=Yes 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val= 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:11.540 00:27:05 -- accel/accel.sh@20 -- # val= 00:13:11.540 00:27:05 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # IFS=: 00:13:11.540 00:27:05 -- accel/accel.sh@19 -- # read -r var val 00:13:13.434 00:27:07 -- accel/accel.sh@20 -- # val= 00:13:13.434 00:27:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # IFS=: 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # read -r var val 00:13:13.434 00:27:07 -- accel/accel.sh@20 -- # val= 00:13:13.434 00:27:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # IFS=: 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # read -r var val 00:13:13.434 00:27:07 -- accel/accel.sh@20 -- # val= 00:13:13.434 00:27:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # IFS=: 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # read -r var val 00:13:13.434 00:27:07 -- accel/accel.sh@20 -- # val= 00:13:13.434 00:27:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # IFS=: 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # read -r var val 00:13:13.434 00:27:07 -- accel/accel.sh@20 -- # val= 00:13:13.434 00:27:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # IFS=: 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # read -r var val 00:13:13.434 00:27:07 -- accel/accel.sh@20 -- # val= 00:13:13.434 00:27:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # IFS=: 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # read -r var val 00:13:13.434 00:27:07 -- accel/accel.sh@20 -- # val= 00:13:13.434 00:27:07 -- accel/accel.sh@21 -- # case "$var" in 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # IFS=: 00:13:13.434 00:27:07 -- accel/accel.sh@19 -- # read -r var val 00:13:13.434 00:27:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:13.434 00:27:07 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:13.434 00:27:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:13.434 00:13:13.434 real 0m2.802s 00:13:13.434 user 0m2.536s 00:13:13.434 sys 0m0.196s 00:13:13.434 00:27:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:13.434 00:27:07 -- common/autotest_common.sh@10 -- # set +x 00:13:13.434 ************************************ 00:13:13.434 END 
TEST accel_decomp_mthread 00:13:13.434 ************************************ 00:13:13.434 00:27:07 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:13.434 00:27:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:13.434 00:27:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:13.434 00:27:07 -- common/autotest_common.sh@10 -- # set +x 00:13:13.694 ************************************ 00:13:13.694 START TEST accel_deomp_full_mthread 00:13:13.694 ************************************ 00:13:13.694 00:27:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:13.694 00:27:07 -- accel/accel.sh@16 -- # local accel_opc 00:13:13.694 00:27:07 -- accel/accel.sh@17 -- # local accel_module 00:13:13.694 00:27:07 -- accel/accel.sh@19 -- # IFS=: 00:13:13.694 00:27:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:13.694 00:27:07 -- accel/accel.sh@19 -- # read -r var val 00:13:13.695 00:27:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:13.695 00:27:07 -- accel/accel.sh@12 -- # build_accel_config 00:13:13.695 00:27:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:13.695 00:27:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:13.695 00:27:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.695 00:27:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:13.695 00:27:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:13.695 00:27:07 -- accel/accel.sh@40 -- # local IFS=, 00:13:13.695 00:27:07 -- accel/accel.sh@41 -- # jq -r . 00:13:13.695 [2024-04-24 00:27:07.286967] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:13:13.695 [2024-04-24 00:27:07.287392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115882 ] 00:13:13.695 [2024-04-24 00:27:07.471535] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.259 [2024-04-24 00:27:07.817701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.516 00:27:08 -- accel/accel.sh@20 -- # val= 00:13:14.516 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.516 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.516 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.516 00:27:08 -- accel/accel.sh@20 -- # val= 00:13:14.516 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.516 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.516 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val= 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val=0x1 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val= 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val= 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val=decompress 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val= 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val=software 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@22 -- # accel_module=software 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val=32 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- 
accel/accel.sh@20 -- # val=32 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val=2 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val=Yes 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val= 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:14.517 00:27:08 -- accel/accel.sh@20 -- # val= 00:13:14.517 00:27:08 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # IFS=: 00:13:14.517 00:27:08 -- accel/accel.sh@19 -- # read -r var val 00:13:16.435 00:27:10 -- accel/accel.sh@20 -- # val= 00:13:16.435 00:27:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # IFS=: 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # read -r var val 00:13:16.435 00:27:10 -- accel/accel.sh@20 -- # val= 00:13:16.435 00:27:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # IFS=: 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # read -r var val 00:13:16.435 00:27:10 -- accel/accel.sh@20 -- # val= 00:13:16.435 00:27:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # IFS=: 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # read -r var val 00:13:16.435 00:27:10 -- accel/accel.sh@20 -- # val= 00:13:16.435 00:27:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # IFS=: 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # read -r var val 00:13:16.435 00:27:10 -- accel/accel.sh@20 -- # val= 00:13:16.435 00:27:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # IFS=: 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # read -r var val 00:13:16.435 00:27:10 -- accel/accel.sh@20 -- # val= 00:13:16.435 00:27:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # IFS=: 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # read -r var val 00:13:16.435 00:27:10 -- accel/accel.sh@20 -- # val= 00:13:16.435 00:27:10 -- accel/accel.sh@21 -- # case "$var" in 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # IFS=: 00:13:16.435 00:27:10 -- accel/accel.sh@19 -- # read -r var val 00:13:16.435 00:27:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:16.435 00:27:10 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:16.435 00:27:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:16.435 00:13:16.435 real 0m2.885s 00:13:16.435 user 0m2.597s 00:13:16.435 sys 0m0.209s 00:13:16.435 00:27:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:16.435 00:27:10 -- common/autotest_common.sh@10 -- # set +x 00:13:16.435 ************************************ 00:13:16.435 END 
TEST accel_deomp_full_mthread 00:13:16.435 ************************************ 00:13:16.435 00:27:10 -- accel/accel.sh@124 -- # [[ n == y ]] 00:13:16.435 00:27:10 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:16.435 00:27:10 -- accel/accel.sh@137 -- # build_accel_config 00:13:16.435 00:27:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:16.435 00:27:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:16.435 00:27:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:16.435 00:27:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:16.435 00:27:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:16.435 00:27:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:16.435 00:27:10 -- common/autotest_common.sh@10 -- # set +x 00:13:16.435 00:27:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:16.435 00:27:10 -- accel/accel.sh@40 -- # local IFS=, 00:13:16.435 00:27:10 -- accel/accel.sh@41 -- # jq -r . 00:13:16.435 ************************************ 00:13:16.435 START TEST accel_dif_functional_tests 00:13:16.435 ************************************ 00:13:16.435 00:27:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:16.694 [2024-04-24 00:27:10.339710] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:13:16.694 [2024-04-24 00:27:10.340003] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115949 ] 00:13:16.995 [2024-04-24 00:27:10.536416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:16.995 [2024-04-24 00:27:10.762344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.995 [2024-04-24 00:27:10.762507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.995 [2024-04-24 00:27:10.762511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.573 00:13:17.573 00:13:17.573 CUnit - A unit testing framework for C - Version 2.1-3 00:13:17.573 http://cunit.sourceforge.net/ 00:13:17.573 00:13:17.573 00:13:17.573 Suite: accel_dif 00:13:17.573 Test: verify: DIF generated, GUARD check ...passed 00:13:17.573 Test: verify: DIF generated, APPTAG check ...passed 00:13:17.573 Test: verify: DIF generated, REFTAG check ...passed 00:13:17.573 Test: verify: DIF not generated, GUARD check ...[2024-04-24 00:27:11.133967] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:17.573 [2024-04-24 00:27:11.134369] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:17.573 passed 00:13:17.573 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 00:27:11.134609] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:17.573 [2024-04-24 00:27:11.134796] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:17.573 passed 00:13:17.573 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 00:27:11.134981] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:17.573 [2024-04-24 00:27:11.135155] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:17.573 passed 00:13:17.573 Test: verify: 
APPTAG correct, APPTAG check ...passed 00:13:17.573 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 00:27:11.135397] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:17.573 passed 00:13:17.573 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:17.573 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:17.573 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:17.573 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 00:27:11.135799] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:17.573 passed 00:13:17.573 Test: generate copy: DIF generated, GUARD check ...passed 00:13:17.573 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:17.573 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:17.573 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:17.573 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:17.573 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:17.573 Test: generate copy: iovecs-len validate ...[2024-04-24 00:27:11.136483] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:13:17.573 passed 00:13:17.573 Test: generate copy: buffer alignment validate ...passed 00:13:17.573 00:13:17.573 Run Summary: Type Total Ran Passed Failed Inactive 00:13:17.573 suites 1 1 n/a 0 0 00:13:17.573 tests 20 20 20 0 0 00:13:17.573 asserts 204 204 204 0 n/a 00:13:17.573 00:13:17.573 Elapsed time = 0.009 seconds 00:13:18.944 ************************************ 00:13:18.944 END TEST accel_dif_functional_tests 00:13:18.944 ************************************ 00:13:18.944 00:13:18.944 real 0m2.367s 00:13:18.944 user 0m4.572s 00:13:18.944 sys 0m0.315s 00:13:18.944 00:27:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:18.944 00:27:12 -- common/autotest_common.sh@10 -- # set +x 00:13:18.944 00:13:18.944 real 1m7.917s 00:13:18.944 user 1m13.998s 00:13:18.944 sys 0m6.905s 00:13:18.944 ************************************ 00:13:18.944 END TEST accel 00:13:18.944 ************************************ 00:13:18.944 00:27:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:18.944 00:27:12 -- common/autotest_common.sh@10 -- # set +x 00:13:18.944 00:27:12 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:18.944 00:27:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:18.944 00:27:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:18.944 00:27:12 -- common/autotest_common.sh@10 -- # set +x 00:13:18.944 ************************************ 00:13:18.944 START TEST accel_rpc 00:13:18.944 ************************************ 00:13:18.944 00:27:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:19.201 * Looking for test storage... 
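The accel_dif_functional_tests run above launches the standalone dif application with an accel JSON config on /dev/fd/62; the *ERROR* lines about Guard, App Tag and Ref Tag comparisons are the expected output of the negative-path "DIF not generated" cases, and the suite still finishes with 20/20 tests passed. A rough sketch of launching the same binary by hand; the empty config object is an assumption, the suite generates its own:

  # Sketch only: run the DIF functional tests directly.
  # '{}' as the accel config is assumed to suffice when no modules are overridden.
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(echo '{}')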
00:13:19.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:19.201 00:27:12 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:19.201 00:27:12 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=116048 00:13:19.201 00:27:12 -- accel/accel_rpc.sh@15 -- # waitforlisten 116048 00:13:19.201 00:27:12 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:19.201 00:27:12 -- common/autotest_common.sh@817 -- # '[' -z 116048 ']' 00:13:19.201 00:27:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.201 00:27:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:19.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.201 00:27:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.201 00:27:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:19.201 00:27:12 -- common/autotest_common.sh@10 -- # set +x 00:13:19.201 [2024-04-24 00:27:12.918099] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:13:19.201 [2024-04-24 00:27:12.919179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116048 ] 00:13:19.457 [2024-04-24 00:27:13.090789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.718 [2024-04-24 00:27:13.327047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.290 00:27:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:20.290 00:27:13 -- common/autotest_common.sh@850 -- # return 0 00:13:20.290 00:27:13 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:20.290 00:27:13 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:20.290 00:27:13 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:20.290 00:27:13 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:20.290 00:27:13 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:20.290 00:27:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:20.290 00:27:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:20.290 00:27:13 -- common/autotest_common.sh@10 -- # set +x 00:13:20.290 ************************************ 00:13:20.290 START TEST accel_assign_opcode 00:13:20.290 ************************************ 00:13:20.290 00:27:13 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:13:20.290 00:27:13 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:20.290 00:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.290 00:27:13 -- common/autotest_common.sh@10 -- # set +x 00:13:20.290 [2024-04-24 00:27:13.996185] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:20.290 00:27:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.290 00:27:13 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:20.290 00:27:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.290 00:27:13 -- common/autotest_common.sh@10 -- # set +x 00:13:20.290 [2024-04-24 00:27:14.004132] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:20.291 00:27:14 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.291 00:27:14 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:20.291 00:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.291 00:27:14 -- common/autotest_common.sh@10 -- # set +x 00:13:21.228 00:27:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.228 00:27:14 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:21.228 00:27:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:21.228 00:27:14 -- common/autotest_common.sh@10 -- # set +x 00:13:21.228 00:27:14 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:21.228 00:27:14 -- accel/accel_rpc.sh@42 -- # grep software 00:13:21.228 00:27:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.503 software 00:13:21.503 00:13:21.503 real 0m1.060s 00:13:21.503 user 0m0.057s 00:13:21.503 sys 0m0.005s 00:13:21.503 00:27:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:21.503 00:27:15 -- common/autotest_common.sh@10 -- # set +x 00:13:21.503 ************************************ 00:13:21.503 END TEST accel_assign_opcode 00:13:21.503 ************************************ 00:13:21.503 00:27:15 -- accel/accel_rpc.sh@55 -- # killprocess 116048 00:13:21.503 00:27:15 -- common/autotest_common.sh@936 -- # '[' -z 116048 ']' 00:13:21.503 00:27:15 -- common/autotest_common.sh@940 -- # kill -0 116048 00:13:21.503 00:27:15 -- common/autotest_common.sh@941 -- # uname 00:13:21.503 00:27:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:21.503 00:27:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116048 00:13:21.503 00:27:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:21.503 killing process with pid 116048 00:13:21.503 00:27:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:21.503 00:27:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116048' 00:13:21.503 00:27:15 -- common/autotest_common.sh@955 -- # kill 116048 00:13:21.503 00:27:15 -- common/autotest_common.sh@960 -- # wait 116048 00:13:24.781 00:13:24.781 real 0m5.142s 00:13:24.781 user 0m5.258s 00:13:24.781 sys 0m0.585s 00:13:24.781 ************************************ 00:13:24.781 END TEST accel_rpc 00:13:24.781 ************************************ 00:13:24.781 00:27:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:24.781 00:27:17 -- common/autotest_common.sh@10 -- # set +x 00:13:24.781 00:27:17 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:24.781 00:27:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:24.781 00:27:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:24.781 00:27:17 -- common/autotest_common.sh@10 -- # set +x 00:13:24.781 ************************************ 00:13:24.781 START TEST app_cmdline 00:13:24.781 ************************************ 00:13:24.781 00:27:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:24.781 * Looking for test storage... 
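The accel_rpc suite that just finished exercises opcode assignment over JSON-RPC: while spdk_tgt is still held in --wait-for-rpc, the copy opcode is first assigned to a non-existent module and then to the software module, and the assignment is verified after framework_start_init. A minimal sketch of the same flow with scripts/rpc.py against a target assumed to be already running with --wait-for-rpc on the default /var/tmp/spdk.sock socket:

  # Sketch of the accel_assign_opcode flow traced above.
  rpc.py accel_assign_opc -o copy -m software      # queue the assignment
  rpc.py framework_start_init                      # complete subsystem init
  rpc.py accel_get_opc_assignments | jq -r .copy   # prints: software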
00:13:24.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:24.782 00:27:18 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:24.782 00:27:18 -- app/cmdline.sh@17 -- # spdk_tgt_pid=116204 00:13:24.782 00:27:18 -- app/cmdline.sh@18 -- # waitforlisten 116204 00:13:24.782 00:27:18 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:24.782 00:27:18 -- common/autotest_common.sh@817 -- # '[' -z 116204 ']' 00:13:24.782 00:27:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.782 00:27:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:24.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.782 00:27:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.782 00:27:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:24.782 00:27:18 -- common/autotest_common.sh@10 -- # set +x 00:13:24.782 [2024-04-24 00:27:18.095668] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:13:24.782 [2024-04-24 00:27:18.095827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116204 ] 00:13:24.782 [2024-04-24 00:27:18.261121] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.782 [2024-04-24 00:27:18.551847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.155 00:27:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:26.155 00:27:19 -- common/autotest_common.sh@850 -- # return 0 00:13:26.155 00:27:19 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:26.155 { 00:13:26.155 "version": "SPDK v24.05-pre git sha1 9fa7361db", 00:13:26.155 "fields": { 00:13:26.155 "major": 24, 00:13:26.155 "minor": 5, 00:13:26.155 "patch": 0, 00:13:26.155 "suffix": "-pre", 00:13:26.155 "commit": "9fa7361db" 00:13:26.155 } 00:13:26.155 } 00:13:26.155 00:27:19 -- app/cmdline.sh@22 -- # expected_methods=() 00:13:26.155 00:27:19 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:26.155 00:27:19 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:26.155 00:27:19 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:26.155 00:27:19 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:26.155 00:27:19 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:26.155 00:27:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:26.155 00:27:19 -- common/autotest_common.sh@10 -- # set +x 00:13:26.155 00:27:19 -- app/cmdline.sh@26 -- # sort 00:13:26.155 00:27:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:26.155 00:27:19 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:26.155 00:27:19 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:26.155 00:27:19 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:26.155 00:27:19 -- common/autotest_common.sh@638 -- # local es=0 00:13:26.155 00:27:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:26.155 00:27:19 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:26.155 00:27:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:26.155 00:27:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:26.155 00:27:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:26.155 00:27:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:26.155 00:27:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:26.155 00:27:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:26.155 00:27:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:26.155 00:27:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:26.413 request: 00:13:26.413 { 00:13:26.413 "method": "env_dpdk_get_mem_stats", 00:13:26.413 "req_id": 1 00:13:26.413 } 00:13:26.413 Got JSON-RPC error response 00:13:26.413 response: 00:13:26.413 { 00:13:26.413 "code": -32601, 00:13:26.413 "message": "Method not found" 00:13:26.413 } 00:13:26.413 00:27:20 -- common/autotest_common.sh@641 -- # es=1 00:13:26.413 00:27:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:26.413 00:27:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:26.413 00:27:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:26.413 00:27:20 -- app/cmdline.sh@1 -- # killprocess 116204 00:13:26.413 00:27:20 -- common/autotest_common.sh@936 -- # '[' -z 116204 ']' 00:13:26.413 00:27:20 -- common/autotest_common.sh@940 -- # kill -0 116204 00:13:26.413 00:27:20 -- common/autotest_common.sh@941 -- # uname 00:13:26.413 00:27:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:26.413 00:27:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116204 00:13:26.413 00:27:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:26.413 00:27:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:26.413 00:27:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116204' 00:13:26.413 killing process with pid 116204 00:13:26.413 00:27:20 -- common/autotest_common.sh@955 -- # kill 116204 00:13:26.413 00:27:20 -- common/autotest_common.sh@960 -- # wait 116204 00:13:29.821 00:13:29.821 real 0m5.060s 00:13:29.821 user 0m5.570s 00:13:29.821 sys 0m0.619s 00:13:29.821 00:27:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:29.821 00:27:22 -- common/autotest_common.sh@10 -- # set +x 00:13:29.821 ************************************ 00:13:29.821 END TEST app_cmdline 00:13:29.821 ************************************ 00:13:29.821 00:27:23 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:29.821 00:27:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:29.821 00:27:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:29.821 00:27:23 -- common/autotest_common.sh@10 -- # set +x 00:13:29.821 ************************************ 00:13:29.821 START TEST version 00:13:29.821 ************************************ 00:13:29.821 00:27:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:29.821 * Looking for test storage... 
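In the cmdline suite above, spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs are callable; the env_dpdk_get_mem_stats call is rejected with JSON-RPC error -32601 ("Method not found"), which is the expected negative-path result rather than a failure. A short sketch of reproducing the same checks by hand against such a target, default RPC socket assumed:

  # Sketch: only the allowed methods succeed on a target started with
  #   spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
  rpc.py spdk_get_version                          # version JSON as shown above
  rpc.py rpc_get_methods | jq -r '.[]' | sort      # rpc_get_methods, spdk_get_version
  rpc.py env_dpdk_get_mem_stats                    # rejected with -32601 Method not found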
00:13:29.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:29.821 00:27:23 -- app/version.sh@17 -- # get_header_version major 00:13:29.821 00:27:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:29.821 00:27:23 -- app/version.sh@14 -- # cut -f2 00:13:29.821 00:27:23 -- app/version.sh@14 -- # tr -d '"' 00:13:29.821 00:27:23 -- app/version.sh@17 -- # major=24 00:13:29.821 00:27:23 -- app/version.sh@18 -- # get_header_version minor 00:13:29.821 00:27:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:29.821 00:27:23 -- app/version.sh@14 -- # tr -d '"' 00:13:29.821 00:27:23 -- app/version.sh@14 -- # cut -f2 00:13:29.821 00:27:23 -- app/version.sh@18 -- # minor=5 00:13:29.821 00:27:23 -- app/version.sh@19 -- # get_header_version patch 00:13:29.821 00:27:23 -- app/version.sh@14 -- # cut -f2 00:13:29.821 00:27:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:29.821 00:27:23 -- app/version.sh@14 -- # tr -d '"' 00:13:29.821 00:27:23 -- app/version.sh@19 -- # patch=0 00:13:29.821 00:27:23 -- app/version.sh@20 -- # get_header_version suffix 00:13:29.821 00:27:23 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:29.821 00:27:23 -- app/version.sh@14 -- # tr -d '"' 00:13:29.821 00:27:23 -- app/version.sh@14 -- # cut -f2 00:13:29.821 00:27:23 -- app/version.sh@20 -- # suffix=-pre 00:13:29.821 00:27:23 -- app/version.sh@22 -- # version=24.5 00:13:29.821 00:27:23 -- app/version.sh@25 -- # (( patch != 0 )) 00:13:29.821 00:27:23 -- app/version.sh@28 -- # version=24.5rc0 00:13:29.821 00:27:23 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:29.821 00:27:23 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:29.821 00:27:23 -- app/version.sh@30 -- # py_version=24.5rc0 00:13:29.821 00:27:23 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:13:29.821 00:13:29.821 real 0m0.146s 00:13:29.821 user 0m0.092s 00:13:29.821 sys 0m0.087s 00:13:29.821 ************************************ 00:13:29.821 END TEST version 00:13:29.821 ************************************ 00:13:29.821 00:27:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:29.821 00:27:23 -- common/autotest_common.sh@10 -- # set +x 00:13:29.821 00:27:23 -- spdk/autotest.sh@184 -- # '[' 1 -eq 1 ']' 00:13:29.821 00:27:23 -- spdk/autotest.sh@185 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:29.821 00:27:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:29.821 00:27:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:29.821 00:27:23 -- common/autotest_common.sh@10 -- # set +x 00:13:29.821 ************************************ 00:13:29.821 START TEST blockdev_general 00:13:29.822 ************************************ 00:13:29.822 00:27:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:13:29.822 * Looking for test storage... 
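The version suite above derives the version string by grepping the SPDK_VERSION_* defines out of include/spdk/version.h with grep/cut/tr and then comparing the result against spdk.__version__ from the Python package. A condensed sketch of that extraction, run from the repository root; the final echo format is an assumption of this sketch:

  # Sketch of get_header_version as traced above (repo root assumed).
  hdr=include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "${major}.${minor}${suffix}"   # 24.5-pre in this run; the suite maps -pre to rc0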
00:13:29.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:29.822 00:27:23 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:29.822 00:27:23 -- bdev/nbd_common.sh@6 -- # set -e 00:13:29.822 00:27:23 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:29.822 00:27:23 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:29.822 00:27:23 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:29.822 00:27:23 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:29.822 00:27:23 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:29.822 00:27:23 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:29.822 00:27:23 -- bdev/blockdev.sh@20 -- # : 00:13:29.822 00:27:23 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:13:29.822 00:27:23 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:13:29.822 00:27:23 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:13:29.822 00:27:23 -- bdev/blockdev.sh@674 -- # uname -s 00:13:29.822 00:27:23 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:13:29.822 00:27:23 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:13:29.822 00:27:23 -- bdev/blockdev.sh@682 -- # test_type=bdev 00:13:29.822 00:27:23 -- bdev/blockdev.sh@683 -- # crypto_device= 00:13:29.822 00:27:23 -- bdev/blockdev.sh@684 -- # dek= 00:13:29.822 00:27:23 -- bdev/blockdev.sh@685 -- # env_ctx= 00:13:29.822 00:27:23 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:13:29.822 00:27:23 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:13:29.822 00:27:23 -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:13:29.822 00:27:23 -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:13:29.822 00:27:23 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:13:29.822 00:27:23 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=116412 00:13:29.822 00:27:23 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:13:29.822 00:27:23 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:29.822 00:27:23 -- bdev/blockdev.sh@49 -- # waitforlisten 116412 00:13:29.822 00:27:23 -- common/autotest_common.sh@817 -- # '[' -z 116412 ']' 00:13:29.822 00:27:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.822 00:27:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:29.822 00:27:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.822 00:27:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:29.822 00:27:23 -- common/autotest_common.sh@10 -- # set +x 00:13:29.822 [2024-04-24 00:27:23.482482] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:13:29.822 [2024-04-24 00:27:23.482648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116412 ] 00:13:30.090 [2024-04-24 00:27:23.650886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.364 [2024-04-24 00:27:23.890743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.639 00:27:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:30.639 00:27:24 -- common/autotest_common.sh@850 -- # return 0 00:13:30.639 00:27:24 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:13:30.639 00:27:24 -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:13:30.639 00:27:24 -- bdev/blockdev.sh@53 -- # rpc_cmd 00:13:30.639 00:27:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:30.639 00:27:24 -- common/autotest_common.sh@10 -- # set +x 00:13:32.023 [2024-04-24 00:27:25.412718] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:32.024 [2024-04-24 00:27:25.412808] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:32.024 00:13:32.024 [2024-04-24 00:27:25.420661] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:32.024 [2024-04-24 00:27:25.420738] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:32.024 00:13:32.024 Malloc0 00:13:32.024 Malloc1 00:13:32.024 Malloc2 00:13:32.024 Malloc3 00:13:32.024 Malloc4 00:13:32.024 Malloc5 00:13:32.024 Malloc6 00:13:32.296 Malloc7 00:13:32.296 Malloc8 00:13:32.296 Malloc9 00:13:32.296 [2024-04-24 00:27:25.951585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:32.296 [2024-04-24 00:27:25.951697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.296 [2024-04-24 00:27:25.951740] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:32.296 [2024-04-24 00:27:25.951762] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.296 [2024-04-24 00:27:25.954476] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.296 [2024-04-24 00:27:25.954574] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:32.296 TestPT 00:13:32.296 00:27:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.296 00:27:25 -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:13:32.296 5000+0 records in 00:13:32.296 5000+0 records out 00:13:32.296 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0342711 s, 299 MB/s 00:13:32.296 00:27:26 -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:13:32.296 00:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.296 00:27:26 -- common/autotest_common.sh@10 -- # set +x 00:13:32.565 AIO0 00:13:32.565 00:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.565 00:27:26 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:13:32.565 00:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.565 00:27:26 -- common/autotest_common.sh@10 -- # set +x 00:13:32.565 00:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.565 00:27:26 -- bdev/blockdev.sh@740 -- # cat 00:13:32.565 00:27:26 
-- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:13:32.565 00:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.565 00:27:26 -- common/autotest_common.sh@10 -- # set +x 00:13:32.565 00:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.565 00:27:26 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:13:32.565 00:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.565 00:27:26 -- common/autotest_common.sh@10 -- # set +x 00:13:32.565 00:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.565 00:27:26 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:32.565 00:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.565 00:27:26 -- common/autotest_common.sh@10 -- # set +x 00:13:32.565 00:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.565 00:27:26 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:13:32.565 00:27:26 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:13:32.565 00:27:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:32.565 00:27:26 -- common/autotest_common.sh@10 -- # set +x 00:13:32.565 00:27:26 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:13:32.565 00:27:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:32.565 00:27:26 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:13:32.565 00:27:26 -- bdev/blockdev.sh@749 -- # jq -r .name 00:13:32.567 00:27:26 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "a5e51566-0212-4393-8e3c-306f8b9a05c3"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a5e51566-0212-4393-8e3c-306f8b9a05c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "1d18f256-753d-5d6d-bf17-39566615a922"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1d18f256-753d-5d6d-bf17-39566615a922",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "a3a7e419-114e-5aff-8cdf-c9cc54b4cc8f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "a3a7e419-114e-5aff-8cdf-c9cc54b4cc8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "31fe191e-86a1-5fe9-988f-c5935725b207"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "31fe191e-86a1-5fe9-988f-c5935725b207",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7a477e20-dd40-5722-8987-d4c52c49a0b5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7a477e20-dd40-5722-8987-d4c52c49a0b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "2b402205-0bd3-5689-b56c-e72ed1b9971f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2b402205-0bd3-5689-b56c-e72ed1b9971f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "442c9ffa-4bd8-55da-95ac-bb40b85354aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "442c9ffa-4bd8-55da-95ac-bb40b85354aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "335d7e56-6992-5e91-920d-e1a86ea15bc6"' ' ],' ' "product_name": "Split Disk",' ' 
"block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "335d7e56-6992-5e91-920d-e1a86ea15bc6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "7ab84337-29d9-548d-a320-edfaa1f9b268"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7ab84337-29d9-548d-a320-edfaa1f9b268",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "fe87f3c2-531e-5f3c-82ec-3767313f1d27"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fe87f3c2-531e-5f3c-82ec-3767313f1d27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "2168058e-35b6-553f-8035-fdbbd63f6e06"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2168058e-35b6-553f-8035-fdbbd63f6e06",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "40c66c3c-f367-5630-a5b7-3b198dd3694c"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "40c66c3c-f367-5630-a5b7-3b198dd3694c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "691673d3-5879-4ea0-9b23-3d70558f8087"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "691673d3-5879-4ea0-9b23-3d70558f8087",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "691673d3-5879-4ea0-9b23-3d70558f8087",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "dea696ff-f5c5-4aea-be0b-23fb451f5383",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "df423bac-5bcd-401f-aa1f-29b5da62b006",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "67c4df71-e620-4d2e-b721-bea4ddada5ba"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "67c4df71-e620-4d2e-b721-bea4ddada5ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "67c4df71-e620-4d2e-b721-bea4ddada5ba",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "49180963-26e2-4b23-ab21-35b2bdf84d27",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "f4c1e0be-7f38-4f71-b591-f89975e12450",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' 
"aliases": [' ' "7d742d9e-1189-4998-b241-de5222bb39f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "7d742d9e-1189-4998-b241-de5222bb39f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7d742d9e-1189-4998-b241-de5222bb39f3",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "918cd98a-7ccc-4f42-8f02-b0066c133702",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7a23faeb-86a1-4a91-be22-40dbb525b21e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "dd2a1a51-12b1-459d-8af2-67824642e999"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "dd2a1a51-12b1-459d-8af2-67824642e999",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:32.567 00:27:26 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:13:32.567 00:27:26 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:13:32.567 00:27:26 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:13:32.567 00:27:26 -- bdev/blockdev.sh@754 -- # killprocess 116412 00:13:32.567 00:27:26 -- common/autotest_common.sh@936 -- # '[' -z 116412 ']' 00:13:32.567 00:27:26 -- common/autotest_common.sh@940 -- # kill -0 116412 00:13:32.567 00:27:26 -- common/autotest_common.sh@941 -- # uname 00:13:32.567 00:27:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:32.567 00:27:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116412 00:13:32.567 00:27:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:32.567 00:27:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:32.567 killing process with pid 116412 00:13:32.567 00:27:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116412' 00:13:32.567 00:27:26 -- common/autotest_common.sh@955 -- # kill 116412 00:13:32.567 00:27:26 -- 
common/autotest_common.sh@960 -- # wait 116412 00:13:36.763 00:27:30 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:36.763 00:27:30 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:36.763 00:27:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:13:36.763 00:27:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.763 00:27:30 -- common/autotest_common.sh@10 -- # set +x 00:13:36.763 ************************************ 00:13:36.763 START TEST bdev_hello_world 00:13:36.763 ************************************ 00:13:36.763 00:27:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:13:36.764 [2024-04-24 00:27:30.440865] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:13:36.764 [2024-04-24 00:27:30.441365] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116527 ] 00:13:37.021 [2024-04-24 00:27:30.633369] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.280 [2024-04-24 00:27:30.952001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.847 [2024-04-24 00:27:31.479360] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:37.847 [2024-04-24 00:27:31.479483] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:37.847 [2024-04-24 00:27:31.487328] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:37.847 [2024-04-24 00:27:31.487420] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:37.847 [2024-04-24 00:27:31.495346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:37.847 [2024-04-24 00:27:31.495411] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:37.847 [2024-04-24 00:27:31.495455] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:38.104 [2024-04-24 00:27:31.741259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:38.104 [2024-04-24 00:27:31.741427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.104 [2024-04-24 00:27:31.741474] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:38.104 [2024-04-24 00:27:31.741508] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.104 [2024-04-24 00:27:31.744317] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.104 [2024-04-24 00:27:31.744391] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:38.362 [2024-04-24 00:27:32.140018] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:38.362 [2024-04-24 00:27:32.140141] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:13:38.362 [2024-04-24 00:27:32.140213] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:38.362 [2024-04-24 00:27:32.140305] hello_bdev.c: 138:hello_write: *NOTICE*: Writing 
to the bdev 00:13:38.362 [2024-04-24 00:27:32.140383] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:38.362 [2024-04-24 00:27:32.140406] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:38.362 [2024-04-24 00:27:32.140475] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:38.362 00:13:38.362 [2024-04-24 00:27:32.140525] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:41.641 00:13:41.641 real 0m4.635s 00:13:41.641 user 0m4.068s 00:13:41.641 sys 0m0.399s 00:13:41.641 00:27:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:41.641 00:27:34 -- common/autotest_common.sh@10 -- # set +x 00:13:41.641 ************************************ 00:13:41.641 END TEST bdev_hello_world 00:13:41.641 ************************************ 00:13:41.641 00:27:35 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:13:41.641 00:27:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:41.641 00:27:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.641 00:27:35 -- common/autotest_common.sh@10 -- # set +x 00:13:41.641 ************************************ 00:13:41.641 START TEST bdev_bounds 00:13:41.641 ************************************ 00:13:41.641 00:27:35 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:13:41.641 00:27:35 -- bdev/blockdev.sh@290 -- # bdevio_pid=116611 00:13:41.641 00:27:35 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:41.641 Process bdevio pid: 116611 00:13:41.641 00:27:35 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 116611' 00:13:41.641 00:27:35 -- bdev/blockdev.sh@293 -- # waitforlisten 116611 00:13:41.641 00:27:35 -- common/autotest_common.sh@817 -- # '[' -z 116611 ']' 00:13:41.641 00:27:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.641 00:27:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:41.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.641 00:27:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.642 00:27:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:41.642 00:27:35 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:41.642 00:27:35 -- common/autotest_common.sh@10 -- # set +x 00:13:41.642 [2024-04-24 00:27:35.173882] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:13:41.642 [2024-04-24 00:27:35.174411] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116611 ] 00:13:41.642 [2024-04-24 00:27:35.380109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.899 [2024-04-24 00:27:35.637091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.899 [2024-04-24 00:27:35.637177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.899 [2024-04-24 00:27:35.637185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.465 [2024-04-24 00:27:36.098696] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:42.465 [2024-04-24 00:27:36.098848] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:42.465 [2024-04-24 00:27:36.106650] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:42.465 [2024-04-24 00:27:36.106787] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:42.465 [2024-04-24 00:27:36.114684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:42.465 [2024-04-24 00:27:36.114814] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:42.465 [2024-04-24 00:27:36.114851] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:42.724 [2024-04-24 00:27:36.338710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:42.724 [2024-04-24 00:27:36.338909] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.724 [2024-04-24 00:27:36.339001] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:42.724 [2024-04-24 00:27:36.339037] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.724 [2024-04-24 00:27:36.342440] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.724 [2024-04-24 00:27:36.342525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:42.982 00:27:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:42.982 00:27:36 -- common/autotest_common.sh@850 -- # return 0 00:13:42.982 00:27:36 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:43.240 I/O targets: 00:13:43.240 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:13:43.240 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:13:43.240 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:13:43.240 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:13:43.240 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:13:43.240 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:13:43.240 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:13:43.240 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:13:43.240 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:13:43.240 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:13:43.240 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:13:43.241 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:13:43.241 raid0: 131072 blocks of 512 bytes (64 MiB) 00:13:43.241 concat0: 131072 blocks of 512 bytes (64 MiB) 00:13:43.241 raid1: 65536 blocks of 512 bytes (32 MiB) 00:13:43.241 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
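The bdev_bounds step runs the bdevio application in the background with the flags traced above, waits for it to start listening on the default RPC socket (/var/tmp/spdk.sock), and only then kicks off the CUnit suites over RPC with tests.py perform_tests. A reduced sketch of that sequence, with the two commands copied from the log and the harness's waitforlisten helper replaced by a crude socket poll (an assumption for illustration; the real helper also retries with a timeout and checks the PID):

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    # Start bdevio with the same arguments the harness traced above and
    # remember its PID so it can be stopped at the end, as killprocess does.
    "$SPDK_REPO/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK_REPO/test/bdev/bdev.json" '' &
    bdevio_pid=$!
    # Stand-in for waitforlisten: wait until the RPC socket exists.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # Ask the running app to execute the bdevio CUnit suites listed below.
    "$SPDK_REPO/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"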
00:13:43.241 00:13:43.241 00:13:43.241 CUnit - A unit testing framework for C - Version 2.1-3 00:13:43.241 http://cunit.sourceforge.net/ 00:13:43.241 00:13:43.241 00:13:43.241 Suite: bdevio tests on: AIO0 00:13:43.241 Test: blockdev write read block ...passed 00:13:43.241 Test: blockdev write zeroes read block ...passed 00:13:43.241 Test: blockdev write zeroes read no split ...passed 00:13:43.241 Test: blockdev write zeroes read split ...passed 00:13:43.241 Test: blockdev write zeroes read split partial ...passed 00:13:43.241 Test: blockdev reset ...passed 00:13:43.241 Test: blockdev write read 8 blocks ...passed 00:13:43.241 Test: blockdev write read size > 128k ...passed 00:13:43.241 Test: blockdev write read invalid size ...passed 00:13:43.241 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.241 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.241 Test: blockdev write read max offset ...passed 00:13:43.241 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.241 Test: blockdev writev readv 8 blocks ...passed 00:13:43.241 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.241 Test: blockdev writev readv block ...passed 00:13:43.241 Test: blockdev writev readv size > 128k ...passed 00:13:43.241 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.241 Test: blockdev comparev and writev ...passed 00:13:43.241 Test: blockdev nvme passthru rw ...passed 00:13:43.241 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.241 Test: blockdev nvme admin passthru ...passed 00:13:43.241 Test: blockdev copy ...passed 00:13:43.241 Suite: bdevio tests on: raid1 00:13:43.241 Test: blockdev write read block ...passed 00:13:43.241 Test: blockdev write zeroes read block ...passed 00:13:43.241 Test: blockdev write zeroes read no split ...passed 00:13:43.241 Test: blockdev write zeroes read split ...passed 00:13:43.241 Test: blockdev write zeroes read split partial ...passed 00:13:43.241 Test: blockdev reset ...passed 00:13:43.241 Test: blockdev write read 8 blocks ...passed 00:13:43.241 Test: blockdev write read size > 128k ...passed 00:13:43.241 Test: blockdev write read invalid size ...passed 00:13:43.241 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.241 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.241 Test: blockdev write read max offset ...passed 00:13:43.241 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.241 Test: blockdev writev readv 8 blocks ...passed 00:13:43.241 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.241 Test: blockdev writev readv block ...passed 00:13:43.241 Test: blockdev writev readv size > 128k ...passed 00:13:43.241 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.241 Test: blockdev comparev and writev ...passed 00:13:43.241 Test: blockdev nvme passthru rw ...passed 00:13:43.241 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.241 Test: blockdev nvme admin passthru ...passed 00:13:43.241 Test: blockdev copy ...passed 00:13:43.241 Suite: bdevio tests on: concat0 00:13:43.241 Test: blockdev write read block ...passed 00:13:43.241 Test: blockdev write zeroes read block ...passed 00:13:43.241 Test: blockdev write zeroes read no split ...passed 00:13:43.499 Test: blockdev write zeroes read split ...passed 00:13:43.499 Test: blockdev write zeroes read split partial ...passed 00:13:43.499 Test: blockdev reset 
...passed 00:13:43.499 Test: blockdev write read 8 blocks ...passed 00:13:43.499 Test: blockdev write read size > 128k ...passed 00:13:43.499 Test: blockdev write read invalid size ...passed 00:13:43.499 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.499 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.499 Test: blockdev write read max offset ...passed 00:13:43.499 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.499 Test: blockdev writev readv 8 blocks ...passed 00:13:43.499 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.499 Test: blockdev writev readv block ...passed 00:13:43.499 Test: blockdev writev readv size > 128k ...passed 00:13:43.499 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.499 Test: blockdev comparev and writev ...passed 00:13:43.499 Test: blockdev nvme passthru rw ...passed 00:13:43.499 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.499 Test: blockdev nvme admin passthru ...passed 00:13:43.499 Test: blockdev copy ...passed 00:13:43.499 Suite: bdevio tests on: raid0 00:13:43.499 Test: blockdev write read block ...passed 00:13:43.499 Test: blockdev write zeroes read block ...passed 00:13:43.499 Test: blockdev write zeroes read no split ...passed 00:13:43.499 Test: blockdev write zeroes read split ...passed 00:13:43.499 Test: blockdev write zeroes read split partial ...passed 00:13:43.499 Test: blockdev reset ...passed 00:13:43.499 Test: blockdev write read 8 blocks ...passed 00:13:43.499 Test: blockdev write read size > 128k ...passed 00:13:43.499 Test: blockdev write read invalid size ...passed 00:13:43.499 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.499 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.499 Test: blockdev write read max offset ...passed 00:13:43.499 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.499 Test: blockdev writev readv 8 blocks ...passed 00:13:43.499 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.499 Test: blockdev writev readv block ...passed 00:13:43.499 Test: blockdev writev readv size > 128k ...passed 00:13:43.499 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.499 Test: blockdev comparev and writev ...passed 00:13:43.499 Test: blockdev nvme passthru rw ...passed 00:13:43.499 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.499 Test: blockdev nvme admin passthru ...passed 00:13:43.499 Test: blockdev copy ...passed 00:13:43.499 Suite: bdevio tests on: TestPT 00:13:43.499 Test: blockdev write read block ...passed 00:13:43.499 Test: blockdev write zeroes read block ...passed 00:13:43.499 Test: blockdev write zeroes read no split ...passed 00:13:43.499 Test: blockdev write zeroes read split ...passed 00:13:43.499 Test: blockdev write zeroes read split partial ...passed 00:13:43.499 Test: blockdev reset ...passed 00:13:43.499 Test: blockdev write read 8 blocks ...passed 00:13:43.499 Test: blockdev write read size > 128k ...passed 00:13:43.499 Test: blockdev write read invalid size ...passed 00:13:43.499 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.499 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.499 Test: blockdev write read max offset ...passed 00:13:43.499 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.499 Test: blockdev writev readv 8 blocks 
...passed 00:13:43.499 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.499 Test: blockdev writev readv block ...passed 00:13:43.499 Test: blockdev writev readv size > 128k ...passed 00:13:43.499 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.499 Test: blockdev comparev and writev ...passed 00:13:43.499 Test: blockdev nvme passthru rw ...passed 00:13:43.499 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.499 Test: blockdev nvme admin passthru ...passed 00:13:43.499 Test: blockdev copy ...passed 00:13:43.499 Suite: bdevio tests on: Malloc2p7 00:13:43.499 Test: blockdev write read block ...passed 00:13:43.499 Test: blockdev write zeroes read block ...passed 00:13:43.499 Test: blockdev write zeroes read no split ...passed 00:13:43.758 Test: blockdev write zeroes read split ...passed 00:13:43.758 Test: blockdev write zeroes read split partial ...passed 00:13:43.758 Test: blockdev reset ...passed 00:13:43.758 Test: blockdev write read 8 blocks ...passed 00:13:43.758 Test: blockdev write read size > 128k ...passed 00:13:43.758 Test: blockdev write read invalid size ...passed 00:13:43.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.758 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.758 Test: blockdev write read max offset ...passed 00:13:43.758 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.758 Test: blockdev writev readv 8 blocks ...passed 00:13:43.758 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.758 Test: blockdev writev readv block ...passed 00:13:43.758 Test: blockdev writev readv size > 128k ...passed 00:13:43.758 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.758 Test: blockdev comparev and writev ...passed 00:13:43.758 Test: blockdev nvme passthru rw ...passed 00:13:43.758 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.758 Test: blockdev nvme admin passthru ...passed 00:13:43.758 Test: blockdev copy ...passed 00:13:43.758 Suite: bdevio tests on: Malloc2p6 00:13:43.758 Test: blockdev write read block ...passed 00:13:43.758 Test: blockdev write zeroes read block ...passed 00:13:43.758 Test: blockdev write zeroes read no split ...passed 00:13:43.758 Test: blockdev write zeroes read split ...passed 00:13:43.758 Test: blockdev write zeroes read split partial ...passed 00:13:43.758 Test: blockdev reset ...passed 00:13:43.758 Test: blockdev write read 8 blocks ...passed 00:13:43.758 Test: blockdev write read size > 128k ...passed 00:13:43.758 Test: blockdev write read invalid size ...passed 00:13:43.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.758 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.758 Test: blockdev write read max offset ...passed 00:13:43.758 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.758 Test: blockdev writev readv 8 blocks ...passed 00:13:43.758 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.758 Test: blockdev writev readv block ...passed 00:13:43.758 Test: blockdev writev readv size > 128k ...passed 00:13:43.758 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.758 Test: blockdev comparev and writev ...passed 00:13:43.758 Test: blockdev nvme passthru rw ...passed 00:13:43.758 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.758 Test: blockdev nvme admin passthru ...passed 00:13:43.758 Test: blockdev copy ...passed 
00:13:43.758 Suite: bdevio tests on: Malloc2p5 00:13:43.758 Test: blockdev write read block ...passed 00:13:43.758 Test: blockdev write zeroes read block ...passed 00:13:43.758 Test: blockdev write zeroes read no split ...passed 00:13:43.758 Test: blockdev write zeroes read split ...passed 00:13:43.758 Test: blockdev write zeroes read split partial ...passed 00:13:43.758 Test: blockdev reset ...passed 00:13:43.758 Test: blockdev write read 8 blocks ...passed 00:13:43.758 Test: blockdev write read size > 128k ...passed 00:13:43.758 Test: blockdev write read invalid size ...passed 00:13:43.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:43.758 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:43.758 Test: blockdev write read max offset ...passed 00:13:43.758 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:43.758 Test: blockdev writev readv 8 blocks ...passed 00:13:43.758 Test: blockdev writev readv 30 x 1block ...passed 00:13:43.758 Test: blockdev writev readv block ...passed 00:13:43.758 Test: blockdev writev readv size > 128k ...passed 00:13:43.758 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:43.758 Test: blockdev comparev and writev ...passed 00:13:43.758 Test: blockdev nvme passthru rw ...passed 00:13:43.758 Test: blockdev nvme passthru vendor specific ...passed 00:13:43.758 Test: blockdev nvme admin passthru ...passed 00:13:43.758 Test: blockdev copy ...passed 00:13:43.758 Suite: bdevio tests on: Malloc2p4 00:13:43.758 Test: blockdev write read block ...passed 00:13:43.758 Test: blockdev write zeroes read block ...passed 00:13:43.758 Test: blockdev write zeroes read no split ...passed 00:13:43.758 Test: blockdev write zeroes read split ...passed 00:13:44.016 Test: blockdev write zeroes read split partial ...passed 00:13:44.016 Test: blockdev reset ...passed 00:13:44.016 Test: blockdev write read 8 blocks ...passed 00:13:44.016 Test: blockdev write read size > 128k ...passed 00:13:44.016 Test: blockdev write read invalid size ...passed 00:13:44.016 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:44.016 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:44.016 Test: blockdev write read max offset ...passed 00:13:44.016 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:44.016 Test: blockdev writev readv 8 blocks ...passed 00:13:44.016 Test: blockdev writev readv 30 x 1block ...passed 00:13:44.016 Test: blockdev writev readv block ...passed 00:13:44.016 Test: blockdev writev readv size > 128k ...passed 00:13:44.016 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:44.016 Test: blockdev comparev and writev ...passed 00:13:44.016 Test: blockdev nvme passthru rw ...passed 00:13:44.016 Test: blockdev nvme passthru vendor specific ...passed 00:13:44.016 Test: blockdev nvme admin passthru ...passed 00:13:44.016 Test: blockdev copy ...passed 00:13:44.016 Suite: bdevio tests on: Malloc2p3 00:13:44.016 Test: blockdev write read block ...passed 00:13:44.016 Test: blockdev write zeroes read block ...passed 00:13:44.016 Test: blockdev write zeroes read no split ...passed 00:13:44.016 Test: blockdev write zeroes read split ...passed 00:13:44.016 Test: blockdev write zeroes read split partial ...passed 00:13:44.016 Test: blockdev reset ...passed 00:13:44.016 Test: blockdev write read 8 blocks ...passed 00:13:44.016 Test: blockdev write read size > 128k ...passed 00:13:44.016 Test: 
blockdev write read invalid size ...passed 00:13:44.016 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:44.016 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:44.016 Test: blockdev write read max offset ...passed 00:13:44.016 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:44.016 Test: blockdev writev readv 8 blocks ...passed 00:13:44.016 Test: blockdev writev readv 30 x 1block ...passed 00:13:44.016 Test: blockdev writev readv block ...passed 00:13:44.016 Test: blockdev writev readv size > 128k ...passed 00:13:44.016 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:44.016 Test: blockdev comparev and writev ...passed 00:13:44.016 Test: blockdev nvme passthru rw ...passed 00:13:44.016 Test: blockdev nvme passthru vendor specific ...passed 00:13:44.016 Test: blockdev nvme admin passthru ...passed 00:13:44.016 Test: blockdev copy ...passed 00:13:44.016 Suite: bdevio tests on: Malloc2p2 00:13:44.016 Test: blockdev write read block ...passed 00:13:44.016 Test: blockdev write zeroes read block ...passed 00:13:44.016 Test: blockdev write zeroes read no split ...passed 00:13:44.016 Test: blockdev write zeroes read split ...passed 00:13:44.016 Test: blockdev write zeroes read split partial ...passed 00:13:44.016 Test: blockdev reset ...passed 00:13:44.016 Test: blockdev write read 8 blocks ...passed 00:13:44.016 Test: blockdev write read size > 128k ...passed 00:13:44.016 Test: blockdev write read invalid size ...passed 00:13:44.016 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:44.016 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:44.016 Test: blockdev write read max offset ...passed 00:13:44.016 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:44.016 Test: blockdev writev readv 8 blocks ...passed 00:13:44.016 Test: blockdev writev readv 30 x 1block ...passed 00:13:44.016 Test: blockdev writev readv block ...passed 00:13:44.016 Test: blockdev writev readv size > 128k ...passed 00:13:44.016 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:44.016 Test: blockdev comparev and writev ...passed 00:13:44.016 Test: blockdev nvme passthru rw ...passed 00:13:44.016 Test: blockdev nvme passthru vendor specific ...passed 00:13:44.016 Test: blockdev nvme admin passthru ...passed 00:13:44.016 Test: blockdev copy ...passed 00:13:44.016 Suite: bdevio tests on: Malloc2p1 00:13:44.016 Test: blockdev write read block ...passed 00:13:44.016 Test: blockdev write zeroes read block ...passed 00:13:44.016 Test: blockdev write zeroes read no split ...passed 00:13:44.016 Test: blockdev write zeroes read split ...passed 00:13:44.273 Test: blockdev write zeroes read split partial ...passed 00:13:44.273 Test: blockdev reset ...passed 00:13:44.273 Test: blockdev write read 8 blocks ...passed 00:13:44.273 Test: blockdev write read size > 128k ...passed 00:13:44.273 Test: blockdev write read invalid size ...passed 00:13:44.273 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:44.273 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:44.273 Test: blockdev write read max offset ...passed 00:13:44.273 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:44.273 Test: blockdev writev readv 8 blocks ...passed 00:13:44.273 Test: blockdev writev readv 30 x 1block ...passed 00:13:44.273 Test: blockdev writev readv block ...passed 
00:13:44.273 Test: blockdev writev readv size > 128k ...passed 00:13:44.273 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:44.273 Test: blockdev comparev and writev ...passed 00:13:44.273 Test: blockdev nvme passthru rw ...passed 00:13:44.273 Test: blockdev nvme passthru vendor specific ...passed 00:13:44.273 Test: blockdev nvme admin passthru ...passed 00:13:44.273 Test: blockdev copy ...passed 00:13:44.273 Suite: bdevio tests on: Malloc2p0 00:13:44.273 Test: blockdev write read block ...passed 00:13:44.273 Test: blockdev write zeroes read block ...passed 00:13:44.273 Test: blockdev write zeroes read no split ...passed 00:13:44.273 Test: blockdev write zeroes read split ...passed 00:13:44.273 Test: blockdev write zeroes read split partial ...passed 00:13:44.273 Test: blockdev reset ...passed 00:13:44.273 Test: blockdev write read 8 blocks ...passed 00:13:44.273 Test: blockdev write read size > 128k ...passed 00:13:44.273 Test: blockdev write read invalid size ...passed 00:13:44.273 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:44.273 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:44.273 Test: blockdev write read max offset ...passed 00:13:44.273 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:44.273 Test: blockdev writev readv 8 blocks ...passed 00:13:44.273 Test: blockdev writev readv 30 x 1block ...passed 00:13:44.273 Test: blockdev writev readv block ...passed 00:13:44.273 Test: blockdev writev readv size > 128k ...passed 00:13:44.273 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:44.273 Test: blockdev comparev and writev ...passed 00:13:44.273 Test: blockdev nvme passthru rw ...passed 00:13:44.273 Test: blockdev nvme passthru vendor specific ...passed 00:13:44.274 Test: blockdev nvme admin passthru ...passed 00:13:44.274 Test: blockdev copy ...passed 00:13:44.274 Suite: bdevio tests on: Malloc1p1 00:13:44.274 Test: blockdev write read block ...passed 00:13:44.274 Test: blockdev write zeroes read block ...passed 00:13:44.274 Test: blockdev write zeroes read no split ...passed 00:13:44.274 Test: blockdev write zeroes read split ...passed 00:13:44.274 Test: blockdev write zeroes read split partial ...passed 00:13:44.274 Test: blockdev reset ...passed 00:13:44.274 Test: blockdev write read 8 blocks ...passed 00:13:44.274 Test: blockdev write read size > 128k ...passed 00:13:44.274 Test: blockdev write read invalid size ...passed 00:13:44.274 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:44.274 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:44.274 Test: blockdev write read max offset ...passed 00:13:44.274 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:44.274 Test: blockdev writev readv 8 blocks ...passed 00:13:44.274 Test: blockdev writev readv 30 x 1block ...passed 00:13:44.274 Test: blockdev writev readv block ...passed 00:13:44.274 Test: blockdev writev readv size > 128k ...passed 00:13:44.274 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:44.274 Test: blockdev comparev and writev ...passed 00:13:44.274 Test: blockdev nvme passthru rw ...passed 00:13:44.274 Test: blockdev nvme passthru vendor specific ...passed 00:13:44.274 Test: blockdev nvme admin passthru ...passed 00:13:44.274 Test: blockdev copy ...passed 00:13:44.274 Suite: bdevio tests on: Malloc1p0 00:13:44.274 Test: blockdev write read block ...passed 00:13:44.274 Test: blockdev 
write zeroes read block ...passed 00:13:44.274 Test: blockdev write zeroes read no split ...passed 00:13:44.274 Test: blockdev write zeroes read split ...passed 00:13:44.274 Test: blockdev write zeroes read split partial ...passed 00:13:44.274 Test: blockdev reset ...passed 00:13:44.274 Test: blockdev write read 8 blocks ...passed 00:13:44.274 Test: blockdev write read size > 128k ...passed 00:13:44.274 Test: blockdev write read invalid size ...passed 00:13:44.274 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:44.274 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:44.274 Test: blockdev write read max offset ...passed 00:13:44.274 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:44.274 Test: blockdev writev readv 8 blocks ...passed 00:13:44.274 Test: blockdev writev readv 30 x 1block ...passed 00:13:44.274 Test: blockdev writev readv block ...passed 00:13:44.531 Test: blockdev writev readv size > 128k ...passed 00:13:44.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:44.531 Test: blockdev comparev and writev ...passed 00:13:44.531 Test: blockdev nvme passthru rw ...passed 00:13:44.531 Test: blockdev nvme passthru vendor specific ...passed 00:13:44.531 Test: blockdev nvme admin passthru ...passed 00:13:44.531 Test: blockdev copy ...passed 00:13:44.531 Suite: bdevio tests on: Malloc0 00:13:44.531 Test: blockdev write read block ...passed 00:13:44.531 Test: blockdev write zeroes read block ...passed 00:13:44.531 Test: blockdev write zeroes read no split ...passed 00:13:44.531 Test: blockdev write zeroes read split ...passed 00:13:44.531 Test: blockdev write zeroes read split partial ...passed 00:13:44.531 Test: blockdev reset ...passed 00:13:44.531 Test: blockdev write read 8 blocks ...passed 00:13:44.531 Test: blockdev write read size > 128k ...passed 00:13:44.531 Test: blockdev write read invalid size ...passed 00:13:44.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:44.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:44.531 Test: blockdev write read max offset ...passed 00:13:44.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:44.531 Test: blockdev writev readv 8 blocks ...passed 00:13:44.531 Test: blockdev writev readv 30 x 1block ...passed 00:13:44.531 Test: blockdev writev readv block ...passed 00:13:44.531 Test: blockdev writev readv size > 128k ...passed 00:13:44.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:44.531 Test: blockdev comparev and writev ...passed 00:13:44.531 Test: blockdev nvme passthru rw ...passed 00:13:44.531 Test: blockdev nvme passthru vendor specific ...passed 00:13:44.531 Test: blockdev nvme admin passthru ...passed 00:13:44.531 Test: blockdev copy ...passed 00:13:44.531 00:13:44.531 Run Summary: Type Total Ran Passed Failed Inactive 00:13:44.531 suites 16 16 n/a 0 0 00:13:44.531 tests 368 368 368 0 0 00:13:44.531 asserts 2224 2224 2224 0 n/a 00:13:44.531 00:13:44.531 Elapsed time = 3.755 seconds 00:13:44.531 0 00:13:44.531 00:27:38 -- bdev/blockdev.sh@295 -- # killprocess 116611 00:13:44.531 00:27:38 -- common/autotest_common.sh@936 -- # '[' -z 116611 ']' 00:13:44.531 00:27:38 -- common/autotest_common.sh@940 -- # kill -0 116611 00:13:44.531 00:27:38 -- common/autotest_common.sh@941 -- # uname 00:13:44.531 00:27:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:44.531 00:27:38 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116611 00:13:44.531 00:27:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:44.531 killing process with pid 116611 00:13:44.531 00:27:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:44.531 00:27:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116611' 00:13:44.531 00:27:38 -- common/autotest_common.sh@955 -- # kill 116611 00:13:44.531 00:27:38 -- common/autotest_common.sh@960 -- # wait 116611 00:13:47.061 00:27:40 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:13:47.061 00:13:47.061 real 0m5.366s 00:13:47.061 user 0m13.662s 00:13:47.061 sys 0m0.597s 00:13:47.061 ************************************ 00:13:47.061 END TEST bdev_bounds 00:13:47.061 ************************************ 00:13:47.061 00:27:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:47.061 00:27:40 -- common/autotest_common.sh@10 -- # set +x 00:13:47.061 00:27:40 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:47.061 00:27:40 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:47.061 00:27:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.061 00:27:40 -- common/autotest_common.sh@10 -- # set +x 00:13:47.061 ************************************ 00:13:47.061 START TEST bdev_nbd 00:13:47.061 ************************************ 00:13:47.061 00:27:40 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:13:47.061 00:27:40 -- bdev/blockdev.sh@300 -- # uname -s 00:13:47.061 00:27:40 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:13:47.061 00:27:40 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:47.061 00:27:40 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:47.061 00:27:40 -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:47.061 00:27:40 -- bdev/blockdev.sh@304 -- # local bdev_all 00:13:47.061 00:27:40 -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:13:47.061 00:27:40 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:13:47.061 00:27:40 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:47.061 00:27:40 -- bdev/blockdev.sh@311 -- # local nbd_all 00:13:47.061 00:27:40 -- bdev/blockdev.sh@312 -- # bdev_num=16 00:13:47.061 00:27:40 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:47.061 00:27:40 -- bdev/blockdev.sh@314 -- # local nbd_list 00:13:47.061 00:27:40 -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:47.061 00:27:40 -- bdev/blockdev.sh@315 -- # local bdev_list 00:13:47.061 00:27:40 -- bdev/blockdev.sh@318 -- # nbd_pid=116718 00:13:47.061 00:27:40 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:47.061 00:27:40 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:47.061 00:27:40 -- bdev/blockdev.sh@320 -- # waitforlisten 116718 /var/tmp/spdk-nbd.sock 00:13:47.061 00:27:40 -- common/autotest_common.sh@817 -- # '[' -z 116718 ']' 00:13:47.061 00:27:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:47.061 00:27:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:47.061 00:27:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:47.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:47.061 00:27:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:47.061 00:27:40 -- common/autotest_common.sh@10 -- # set +x 00:13:47.061 [2024-04-24 00:27:40.627877] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:13:47.061 [2024-04-24 00:27:40.628145] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.061 [2024-04-24 00:27:40.808083] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.318 [2024-04-24 00:27:41.023494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.883 [2024-04-24 00:27:41.472842] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:47.883 [2024-04-24 00:27:41.472940] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:47.883 [2024-04-24 00:27:41.480817] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:47.883 [2024-04-24 00:27:41.480882] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:47.883 [2024-04-24 00:27:41.488845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:47.883 [2024-04-24 00:27:41.488902] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:47.883 [2024-04-24 00:27:41.488939] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:48.172 [2024-04-24 00:27:41.706625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:48.172 [2024-04-24 00:27:41.706743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.172 [2024-04-24 00:27:41.706782] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:48.172 [2024-04-24 00:27:41.706811] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.173 [2024-04-24 00:27:41.709358] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.173 [2024-04-24 00:27:41.709412] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:48.454 00:27:42 -- common/autotest_common.sh@846 -- # (( i == 0 
)) 00:13:48.454 00:27:42 -- common/autotest_common.sh@850 -- # return 0 00:13:48.454 00:27:42 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:48.454 00:27:42 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:48.454 00:27:42 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:48.454 00:27:42 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:48.454 00:27:42 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:13:48.454 00:27:42 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:48.454 00:27:42 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:48.454 00:27:42 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:48.454 00:27:42 -- bdev/nbd_common.sh@24 -- # local i 00:13:48.454 00:27:42 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:48.454 00:27:42 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:48.454 00:27:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:48.454 00:27:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:13:48.712 00:27:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:48.712 00:27:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:48.712 00:27:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:48.712 00:27:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:13:48.712 00:27:42 -- common/autotest_common.sh@855 -- # local i 00:13:48.712 00:27:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:48.712 00:27:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:48.712 00:27:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:13:48.712 00:27:42 -- common/autotest_common.sh@859 -- # break 00:13:48.712 00:27:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:48.712 00:27:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:48.712 00:27:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.712 1+0 records in 00:13:48.712 1+0 records out 00:13:48.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279566 s, 14.7 MB/s 00:13:48.712 00:27:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.712 00:27:42 -- common/autotest_common.sh@872 -- # size=4096 00:13:48.712 00:27:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.712 00:27:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:48.712 00:27:42 -- common/autotest_common.sh@875 -- # return 0 00:13:48.712 00:27:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:48.712 00:27:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:48.712 00:27:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:13:48.983 00:27:42 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:48.983 00:27:42 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:48.983 00:27:42 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:48.983 00:27:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:13:48.983 00:27:42 -- common/autotest_common.sh@855 -- # local i 00:13:48.983 00:27:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:48.983 00:27:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:48.983 00:27:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:13:48.983 00:27:42 -- common/autotest_common.sh@859 -- # break 00:13:48.983 00:27:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:48.983 00:27:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:48.983 00:27:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.983 1+0 records in 00:13:48.983 1+0 records out 00:13:48.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407954 s, 10.0 MB/s 00:13:48.983 00:27:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.983 00:27:42 -- common/autotest_common.sh@872 -- # size=4096 00:13:48.983 00:27:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.983 00:27:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:48.983 00:27:42 -- common/autotest_common.sh@875 -- # return 0 00:13:48.983 00:27:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:48.983 00:27:42 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:48.983 00:27:42 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:13:49.549 00:27:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:49.549 00:27:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:49.549 00:27:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:49.549 00:27:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:13:49.549 00:27:43 -- common/autotest_common.sh@855 -- # local i 00:13:49.549 00:27:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:49.549 00:27:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:49.549 00:27:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:13:49.549 00:27:43 -- common/autotest_common.sh@859 -- # break 00:13:49.549 00:27:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:49.549 00:27:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:49.549 00:27:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.549 1+0 records in 00:13:49.549 1+0 records out 00:13:49.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374943 s, 10.9 MB/s 00:13:49.549 00:27:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.549 00:27:43 -- common/autotest_common.sh@872 -- # size=4096 00:13:49.549 00:27:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.549 00:27:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:49.549 00:27:43 -- common/autotest_common.sh@875 -- # return 0 00:13:49.549 00:27:43 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:49.549 00:27:43 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 
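Each of the repeated nbd_start_disk blocks in this part of the log is one iteration of the same attach-and-verify pattern: ask the bdev_svc app listening on /var/tmp/spdk-nbd.sock to export a bdev over NBD, wait (up to 20 attempts, per the traced loop) for the /dev/nbdX entry to appear in /proc/partitions, then do a single 4 KiB O_DIRECT read from the device into a scratch file and check that the copy is non-empty. A sketch of one iteration, with the paths and checks taken from the trace; the sleep interval and the RPC returning the chosen device path are assumptions based on the nbd_device=/dev/nbd0 assignment shown above:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    rpc() { "$SPDK_REPO/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
    bdev=Malloc0

    # Let the RPC pick the NBD device and capture the path it reports.
    nbd=$(rpc nbd_start_disk "$bdev")

    # waitfornbd: poll /proc/partitions until the device node shows up.
    for i in $(seq 1 20); do
        grep -q -w "$(basename "$nbd")" /proc/partitions && break
        sleep 0.1   # interval is an assumption; the trace only shows the loop bounds
    done

    # Prove the export works: one direct-I/O block read, then check the
    # copied file is non-empty, exactly as the '[' 4096 '!=' 0 ']' test does.
    dd if="$nbd" of="$SPDK_REPO/test/bdev/nbdtest" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$SPDK_REPO/test/bdev/nbdtest")
    [ "$size" != 0 ]
    rm -f "$SPDK_REPO/test/bdev/nbdtest"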
00:13:49.549 00:27:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:13:49.807 00:27:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:49.807 00:27:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:49.807 00:27:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:49.807 00:27:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:13:49.807 00:27:43 -- common/autotest_common.sh@855 -- # local i 00:13:49.807 00:27:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:49.807 00:27:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:49.807 00:27:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:13:49.807 00:27:43 -- common/autotest_common.sh@859 -- # break 00:13:49.807 00:27:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:49.807 00:27:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:49.807 00:27:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.807 1+0 records in 00:13:49.807 1+0 records out 00:13:49.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414519 s, 9.9 MB/s 00:13:49.807 00:27:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.807 00:27:43 -- common/autotest_common.sh@872 -- # size=4096 00:13:49.807 00:27:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.807 00:27:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:49.807 00:27:43 -- common/autotest_common.sh@875 -- # return 0 00:13:49.807 00:27:43 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:49.807 00:27:43 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:49.807 00:27:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:13:50.065 00:27:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:50.065 00:27:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:50.065 00:27:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:50.065 00:27:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:13:50.065 00:27:43 -- common/autotest_common.sh@855 -- # local i 00:13:50.065 00:27:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:50.065 00:27:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:50.065 00:27:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:13:50.065 00:27:43 -- common/autotest_common.sh@859 -- # break 00:13:50.065 00:27:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:50.065 00:27:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:50.065 00:27:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.065 1+0 records in 00:13:50.065 1+0 records out 00:13:50.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044115 s, 9.3 MB/s 00:13:50.065 00:27:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.065 00:27:43 -- common/autotest_common.sh@872 -- # size=4096 00:13:50.065 00:27:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.065 00:27:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:50.065 00:27:43 -- common/autotest_common.sh@875 -- # return 0 00:13:50.065 00:27:43 -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:50.065 00:27:43 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:50.065 00:27:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:13:50.324 00:27:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:50.324 00:27:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:50.324 00:27:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:50.324 00:27:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:13:50.324 00:27:44 -- common/autotest_common.sh@855 -- # local i 00:13:50.324 00:27:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:50.324 00:27:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:50.324 00:27:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:13:50.324 00:27:44 -- common/autotest_common.sh@859 -- # break 00:13:50.324 00:27:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:50.324 00:27:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:50.324 00:27:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.324 1+0 records in 00:13:50.324 1+0 records out 00:13:50.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498757 s, 8.2 MB/s 00:13:50.324 00:27:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.324 00:27:44 -- common/autotest_common.sh@872 -- # size=4096 00:13:50.324 00:27:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.324 00:27:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:50.324 00:27:44 -- common/autotest_common.sh@875 -- # return 0 00:13:50.324 00:27:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:50.324 00:27:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:50.324 00:27:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:13:50.890 00:27:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:50.890 00:27:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:50.890 00:27:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:50.890 00:27:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:13:50.890 00:27:44 -- common/autotest_common.sh@855 -- # local i 00:13:50.890 00:27:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:50.890 00:27:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:50.890 00:27:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:13:50.890 00:27:44 -- common/autotest_common.sh@859 -- # break 00:13:50.891 00:27:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:50.891 00:27:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:50.891 00:27:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.891 1+0 records in 00:13:50.891 1+0 records out 00:13:50.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047423 s, 8.6 MB/s 00:13:50.891 00:27:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.891 00:27:44 -- common/autotest_common.sh@872 -- # size=4096 00:13:50.891 00:27:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.891 00:27:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 
00:13:50.891 00:27:44 -- common/autotest_common.sh@875 -- # return 0 00:13:50.891 00:27:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:50.891 00:27:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:50.891 00:27:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:13:51.149 00:27:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:13:51.149 00:27:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:13:51.149 00:27:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:13:51.149 00:27:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd7 00:13:51.149 00:27:44 -- common/autotest_common.sh@855 -- # local i 00:13:51.149 00:27:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:51.149 00:27:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:51.149 00:27:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd7 /proc/partitions 00:13:51.149 00:27:44 -- common/autotest_common.sh@859 -- # break 00:13:51.149 00:27:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:51.149 00:27:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:51.149 00:27:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.149 1+0 records in 00:13:51.149 1+0 records out 00:13:51.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523949 s, 7.8 MB/s 00:13:51.149 00:27:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.149 00:27:44 -- common/autotest_common.sh@872 -- # size=4096 00:13:51.149 00:27:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.149 00:27:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:51.149 00:27:44 -- common/autotest_common.sh@875 -- # return 0 00:13:51.149 00:27:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:51.149 00:27:44 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:51.149 00:27:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:13:51.407 00:27:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:13:51.407 00:27:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:13:51.407 00:27:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:13:51.407 00:27:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd8 00:13:51.407 00:27:45 -- common/autotest_common.sh@855 -- # local i 00:13:51.407 00:27:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:51.407 00:27:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:51.407 00:27:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd8 /proc/partitions 00:13:51.407 00:27:45 -- common/autotest_common.sh@859 -- # break 00:13:51.407 00:27:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:51.407 00:27:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:51.407 00:27:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.407 1+0 records in 00:13:51.407 1+0 records out 00:13:51.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00080283 s, 5.1 MB/s 00:13:51.407 00:27:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.407 00:27:45 -- common/autotest_common.sh@872 -- # size=4096 00:13:51.407 00:27:45 -- common/autotest_common.sh@873 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.407 00:27:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:51.407 00:27:45 -- common/autotest_common.sh@875 -- # return 0 00:13:51.407 00:27:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:51.407 00:27:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:51.407 00:27:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:13:51.667 00:27:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:13:51.667 00:27:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:13:51.667 00:27:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:13:51.667 00:27:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd9 00:13:51.667 00:27:45 -- common/autotest_common.sh@855 -- # local i 00:13:51.667 00:27:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:51.667 00:27:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:51.667 00:27:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd9 /proc/partitions 00:13:51.667 00:27:45 -- common/autotest_common.sh@859 -- # break 00:13:51.667 00:27:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:51.667 00:27:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:51.668 00:27:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.668 1+0 records in 00:13:51.668 1+0 records out 00:13:51.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000729265 s, 5.6 MB/s 00:13:51.668 00:27:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.668 00:27:45 -- common/autotest_common.sh@872 -- # size=4096 00:13:51.668 00:27:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.668 00:27:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:51.668 00:27:45 -- common/autotest_common.sh@875 -- # return 0 00:13:51.668 00:27:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:51.668 00:27:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:51.668 00:27:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:13:52.236 00:27:45 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:13:52.236 00:27:45 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:13:52.236 00:27:45 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:13:52.236 00:27:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:13:52.236 00:27:45 -- common/autotest_common.sh@855 -- # local i 00:13:52.236 00:27:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:52.236 00:27:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:52.236 00:27:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:13:52.236 00:27:45 -- common/autotest_common.sh@859 -- # break 00:13:52.236 00:27:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:52.236 00:27:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:52.236 00:27:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:52.236 1+0 records in 00:13:52.236 1+0 records out 00:13:52.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512669 s, 8.0 MB/s 00:13:52.236 00:27:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.236 00:27:45 -- 
common/autotest_common.sh@872 -- # size=4096 00:13:52.236 00:27:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.236 00:27:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:52.236 00:27:45 -- common/autotest_common.sh@875 -- # return 0 00:13:52.236 00:27:45 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:52.236 00:27:45 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:52.236 00:27:45 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:13:52.236 00:27:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:13:52.236 00:27:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:13:52.236 00:27:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:13:52.236 00:27:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:13:52.236 00:27:46 -- common/autotest_common.sh@855 -- # local i 00:13:52.236 00:27:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:52.237 00:27:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:52.237 00:27:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:13:52.494 00:27:46 -- common/autotest_common.sh@859 -- # break 00:13:52.494 00:27:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:52.494 00:27:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:52.494 00:27:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:52.494 1+0 records in 00:13:52.494 1+0 records out 00:13:52.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653724 s, 6.3 MB/s 00:13:52.494 00:27:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.494 00:27:46 -- common/autotest_common.sh@872 -- # size=4096 00:13:52.494 00:27:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.494 00:27:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:52.494 00:27:46 -- common/autotest_common.sh@875 -- # return 0 00:13:52.494 00:27:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:52.494 00:27:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:52.494 00:27:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:13:52.752 00:27:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:13:52.752 00:27:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:13:52.752 00:27:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:13:52.752 00:27:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:13:52.752 00:27:46 -- common/autotest_common.sh@855 -- # local i 00:13:52.752 00:27:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:52.752 00:27:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:52.752 00:27:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:13:52.752 00:27:46 -- common/autotest_common.sh@859 -- # break 00:13:52.752 00:27:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:52.752 00:27:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:52.752 00:27:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:52.752 1+0 records in 00:13:52.753 1+0 records out 00:13:52.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000751327 s, 5.5 MB/s 00:13:52.753 00:27:46 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.753 00:27:46 -- common/autotest_common.sh@872 -- # size=4096 00:13:52.753 00:27:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.753 00:27:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:52.753 00:27:46 -- common/autotest_common.sh@875 -- # return 0 00:13:52.753 00:27:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:52.753 00:27:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:52.753 00:27:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:13:53.011 00:27:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:13:53.011 00:27:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:13:53.011 00:27:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:13:53.011 00:27:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:13:53.011 00:27:46 -- common/autotest_common.sh@855 -- # local i 00:13:53.011 00:27:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:53.011 00:27:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:53.011 00:27:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:13:53.011 00:27:46 -- common/autotest_common.sh@859 -- # break 00:13:53.011 00:27:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:53.011 00:27:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:53.011 00:27:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.011 1+0 records in 00:13:53.011 1+0 records out 00:13:53.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126739 s, 3.2 MB/s 00:13:53.011 00:27:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.011 00:27:46 -- common/autotest_common.sh@872 -- # size=4096 00:13:53.011 00:27:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.011 00:27:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:53.011 00:27:46 -- common/autotest_common.sh@875 -- # return 0 00:13:53.011 00:27:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:53.011 00:27:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:53.011 00:27:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:13:53.269 00:27:46 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:13:53.269 00:27:46 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:13:53.269 00:27:46 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:13:53.269 00:27:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:13:53.269 00:27:46 -- common/autotest_common.sh@855 -- # local i 00:13:53.269 00:27:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:53.269 00:27:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:53.269 00:27:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:13:53.269 00:27:46 -- common/autotest_common.sh@859 -- # break 00:13:53.269 00:27:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:53.269 00:27:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:53.269 00:27:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.269 1+0 records in 00:13:53.269 1+0 records out 
00:13:53.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000955694 s, 4.3 MB/s 00:13:53.269 00:27:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.269 00:27:46 -- common/autotest_common.sh@872 -- # size=4096 00:13:53.269 00:27:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.269 00:27:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:53.269 00:27:46 -- common/autotest_common.sh@875 -- # return 0 00:13:53.269 00:27:46 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:53.269 00:27:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:53.269 00:27:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:13:53.527 00:27:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:13:53.527 00:27:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:13:53.527 00:27:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:13:53.527 00:27:47 -- common/autotest_common.sh@854 -- # local nbd_name=nbd15 00:13:53.527 00:27:47 -- common/autotest_common.sh@855 -- # local i 00:13:53.527 00:27:47 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:53.527 00:27:47 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:53.527 00:27:47 -- common/autotest_common.sh@858 -- # grep -q -w nbd15 /proc/partitions 00:13:53.527 00:27:47 -- common/autotest_common.sh@859 -- # break 00:13:53.527 00:27:47 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:53.527 00:27:47 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:53.527 00:27:47 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.527 1+0 records in 00:13:53.527 1+0 records out 00:13:53.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114895 s, 3.6 MB/s 00:13:53.527 00:27:47 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.528 00:27:47 -- common/autotest_common.sh@872 -- # size=4096 00:13:53.528 00:27:47 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.528 00:27:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:53.528 00:27:47 -- common/autotest_common.sh@875 -- # return 0 00:13:53.528 00:27:47 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:53.528 00:27:47 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:53.528 00:27:47 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:53.786 00:27:47 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd0", 00:13:53.786 "bdev_name": "Malloc0" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd1", 00:13:53.786 "bdev_name": "Malloc1p0" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd2", 00:13:53.786 "bdev_name": "Malloc1p1" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd3", 00:13:53.786 "bdev_name": "Malloc2p0" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd4", 00:13:53.786 "bdev_name": "Malloc2p1" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd5", 00:13:53.786 "bdev_name": "Malloc2p2" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd6", 00:13:53.786 "bdev_name": "Malloc2p3" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd7", 00:13:53.786 "bdev_name": "Malloc2p4" 00:13:53.786 }, 
00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd8", 00:13:53.786 "bdev_name": "Malloc2p5" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd9", 00:13:53.786 "bdev_name": "Malloc2p6" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd10", 00:13:53.786 "bdev_name": "Malloc2p7" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd11", 00:13:53.786 "bdev_name": "TestPT" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd12", 00:13:53.786 "bdev_name": "raid0" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd13", 00:13:53.786 "bdev_name": "concat0" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd14", 00:13:53.786 "bdev_name": "raid1" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd15", 00:13:53.786 "bdev_name": "AIO0" 00:13:53.786 } 00:13:53.786 ]' 00:13:53.786 00:27:47 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:53.786 00:27:47 -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd0", 00:13:53.786 "bdev_name": "Malloc0" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd1", 00:13:53.786 "bdev_name": "Malloc1p0" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd2", 00:13:53.786 "bdev_name": "Malloc1p1" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd3", 00:13:53.786 "bdev_name": "Malloc2p0" 00:13:53.786 }, 00:13:53.786 { 00:13:53.786 "nbd_device": "/dev/nbd4", 00:13:53.786 "bdev_name": "Malloc2p1" 00:13:53.786 }, 00:13:53.787 { 00:13:53.787 "nbd_device": "/dev/nbd5", 00:13:53.787 "bdev_name": "Malloc2p2" 00:13:53.787 }, 00:13:53.787 { 00:13:53.787 "nbd_device": "/dev/nbd6", 00:13:53.787 "bdev_name": "Malloc2p3" 00:13:53.787 }, 00:13:53.787 { 00:13:53.787 "nbd_device": "/dev/nbd7", 00:13:53.787 "bdev_name": "Malloc2p4" 00:13:53.787 }, 00:13:53.787 { 00:13:53.787 "nbd_device": "/dev/nbd8", 00:13:53.787 "bdev_name": "Malloc2p5" 00:13:53.787 }, 00:13:53.787 { 00:13:53.787 "nbd_device": "/dev/nbd9", 00:13:53.787 "bdev_name": "Malloc2p6" 00:13:53.787 }, 00:13:53.787 { 00:13:53.787 "nbd_device": "/dev/nbd10", 00:13:53.787 "bdev_name": "Malloc2p7" 00:13:53.787 }, 00:13:53.787 { 00:13:53.787 "nbd_device": "/dev/nbd11", 00:13:53.787 "bdev_name": "TestPT" 00:13:53.787 }, 00:13:53.787 { 00:13:53.787 "nbd_device": "/dev/nbd12", 00:13:53.787 "bdev_name": "raid0" 00:13:53.787 }, 00:13:53.787 { 00:13:53.787 "nbd_device": "/dev/nbd13", 00:13:53.787 "bdev_name": "concat0" 00:13:53.787 }, 00:13:53.787 { 00:13:53.787 "nbd_device": "/dev/nbd14", 00:13:53.787 "bdev_name": "raid1" 00:13:53.787 }, 00:13:53.787 { 00:13:53.787 "nbd_device": "/dev/nbd15", 00:13:53.787 "bdev_name": "AIO0" 00:13:53.787 } 00:13:53.787 ]' 00:13:53.787 00:27:47 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:54.045 00:27:47 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:13:54.045 00:27:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:54.045 00:27:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:13:54.045 00:27:47 -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:13:54.045 00:27:47 -- bdev/nbd_common.sh@51 -- # local i 00:13:54.045 00:27:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.045 00:27:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:54.045 00:27:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:54.303 00:27:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:54.303 00:27:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:54.303 00:27:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.303 00:27:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.303 00:27:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:54.303 00:27:47 -- bdev/nbd_common.sh@41 -- # break 00:13:54.303 00:27:47 -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.303 00:27:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.303 00:27:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:54.561 00:27:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:54.561 00:27:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:54.561 00:27:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:54.561 00:27:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.561 00:27:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.561 00:27:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:54.561 00:27:48 -- bdev/nbd_common.sh@41 -- # break 00:13:54.561 00:27:48 -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.561 00:27:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.561 00:27:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:54.819 00:27:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:54.819 00:27:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:54.819 00:27:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:54.819 00:27:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.819 00:27:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.819 00:27:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:54.819 00:27:48 -- bdev/nbd_common.sh@41 -- # break 00:13:54.819 00:27:48 -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.819 00:27:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.819 00:27:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:55.077 00:27:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:55.077 00:27:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:55.077 00:27:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:55.077 00:27:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.077 00:27:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.077 00:27:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:55.077 00:27:48 -- bdev/nbd_common.sh@41 -- # break 00:13:55.077 00:27:48 -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.077 00:27:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.077 00:27:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:55.335 00:27:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:55.335 00:27:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:55.335 
00:27:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:55.335 00:27:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.335 00:27:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.335 00:27:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:55.335 00:27:49 -- bdev/nbd_common.sh@41 -- # break 00:13:55.335 00:27:49 -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.335 00:27:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.335 00:27:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:55.603 00:27:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:55.603 00:27:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:55.603 00:27:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:55.603 00:27:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.603 00:27:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.603 00:27:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:55.603 00:27:49 -- bdev/nbd_common.sh@41 -- # break 00:13:55.603 00:27:49 -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.603 00:27:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.603 00:27:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:55.861 00:27:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:55.862 00:27:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:55.862 00:27:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:55.862 00:27:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.862 00:27:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.862 00:27:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:55.862 00:27:49 -- bdev/nbd_common.sh@41 -- # break 00:13:55.862 00:27:49 -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.862 00:27:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.862 00:27:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:56.119 00:27:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:56.119 00:27:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:56.119 00:27:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:56.119 00:27:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.119 00:27:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.119 00:27:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:56.119 00:27:49 -- bdev/nbd_common.sh@41 -- # break 00:13:56.119 00:27:49 -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.119 00:27:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.119 00:27:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:56.377 00:27:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:56.377 00:27:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:56.377 00:27:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:56.377 00:27:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.377 00:27:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.377 00:27:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:56.377 00:27:49 -- bdev/nbd_common.sh@41 -- # break 00:13:56.377 00:27:49 -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.377 00:27:49 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:13:56.377 00:27:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:56.634 00:27:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:56.634 00:27:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:56.634 00:27:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:56.634 00:27:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.634 00:27:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.634 00:27:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:56.634 00:27:50 -- bdev/nbd_common.sh@41 -- # break 00:13:56.634 00:27:50 -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.634 00:27:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.634 00:27:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:56.925 00:27:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:56.925 00:27:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:56.925 00:27:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:56.925 00:27:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.925 00:27:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.925 00:27:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:56.925 00:27:50 -- bdev/nbd_common.sh@41 -- # break 00:13:56.925 00:27:50 -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.925 00:27:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.925 00:27:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:57.182 00:27:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:57.182 00:27:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:57.182 00:27:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:57.182 00:27:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.182 00:27:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.182 00:27:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:57.182 00:27:50 -- bdev/nbd_common.sh@41 -- # break 00:13:57.182 00:27:50 -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.182 00:27:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.182 00:27:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@41 -- # break 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@41 -- # break 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.750 00:27:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:58.316 00:27:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:58.316 00:27:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:58.316 00:27:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:58.316 00:27:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.316 00:27:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.316 00:27:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:58.316 00:27:51 -- bdev/nbd_common.sh@41 -- # break 00:13:58.316 00:27:51 -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.316 00:27:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.316 00:27:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:58.574 00:27:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:58.574 00:27:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:58.574 00:27:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:58.574 00:27:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.574 00:27:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.574 00:27:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:58.574 00:27:52 -- bdev/nbd_common.sh@41 -- # break 00:13:58.574 00:27:52 -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.574 00:27:52 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:58.574 00:27:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.574 00:27:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:58.832 00:27:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@65 -- # true 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@65 -- # count=0 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@122 -- # count=0 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@127 -- # return 0 00:13:58.833 00:27:52 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 
'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@12 -- # local i 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:58.833 00:27:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:59.091 /dev/nbd0 00:13:59.091 00:27:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:59.091 00:27:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:59.091 00:27:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:13:59.091 00:27:52 -- common/autotest_common.sh@855 -- # local i 00:13:59.091 00:27:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:59.091 00:27:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:59.091 00:27:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:13:59.091 00:27:52 -- common/autotest_common.sh@859 -- # break 00:13:59.091 00:27:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:59.091 00:27:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:59.091 00:27:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.091 1+0 records in 00:13:59.091 1+0 records out 00:13:59.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378558 s, 10.8 MB/s 00:13:59.091 00:27:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.400 00:27:52 -- common/autotest_common.sh@872 -- # size=4096 00:13:59.400 00:27:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.400 00:27:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:59.400 00:27:52 -- common/autotest_common.sh@875 -- # return 0 00:13:59.400 00:27:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.400 
00:27:52 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:59.400 00:27:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:13:59.400 /dev/nbd1 00:13:59.694 00:27:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:59.694 00:27:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:59.694 00:27:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:13:59.694 00:27:53 -- common/autotest_common.sh@855 -- # local i 00:13:59.694 00:27:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:59.694 00:27:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:59.694 00:27:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:13:59.694 00:27:53 -- common/autotest_common.sh@859 -- # break 00:13:59.694 00:27:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:59.694 00:27:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:59.694 00:27:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.694 1+0 records in 00:13:59.694 1+0 records out 00:13:59.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264702 s, 15.5 MB/s 00:13:59.694 00:27:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.694 00:27:53 -- common/autotest_common.sh@872 -- # size=4096 00:13:59.694 00:27:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.694 00:27:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:59.694 00:27:53 -- common/autotest_common.sh@875 -- # return 0 00:13:59.694 00:27:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.694 00:27:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:59.694 00:27:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:13:59.694 /dev/nbd10 00:13:59.694 00:27:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:59.694 00:27:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:59.694 00:27:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:13:59.694 00:27:53 -- common/autotest_common.sh@855 -- # local i 00:13:59.694 00:27:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:59.694 00:27:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:59.694 00:27:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:13:59.694 00:27:53 -- common/autotest_common.sh@859 -- # break 00:13:59.694 00:27:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:59.694 00:27:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:59.694 00:27:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.694 1+0 records in 00:13:59.694 1+0 records out 00:13:59.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561889 s, 7.3 MB/s 00:13:59.694 00:27:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.694 00:27:53 -- common/autotest_common.sh@872 -- # size=4096 00:13:59.694 00:27:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.694 00:27:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:59.694 00:27:53 -- common/autotest_common.sh@875 -- # return 0 00:13:59.694 00:27:53 -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:13:59.694 00:27:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:59.694 00:27:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:13:59.953 /dev/nbd11 00:13:59.953 00:27:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:59.953 00:27:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:59.953 00:27:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:13:59.953 00:27:53 -- common/autotest_common.sh@855 -- # local i 00:13:59.953 00:27:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:59.953 00:27:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:59.953 00:27:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:13:59.953 00:27:53 -- common/autotest_common.sh@859 -- # break 00:13:59.953 00:27:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:59.953 00:27:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:59.953 00:27:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.953 1+0 records in 00:13:59.953 1+0 records out 00:13:59.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462002 s, 8.9 MB/s 00:13:59.953 00:27:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.953 00:27:53 -- common/autotest_common.sh@872 -- # size=4096 00:13:59.953 00:27:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.953 00:27:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:59.953 00:27:53 -- common/autotest_common.sh@875 -- # return 0 00:13:59.953 00:27:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.953 00:27:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:59.953 00:27:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:14:00.210 /dev/nbd12 00:14:00.210 00:27:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:00.210 00:27:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:00.210 00:27:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:14:00.210 00:27:53 -- common/autotest_common.sh@855 -- # local i 00:14:00.210 00:27:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:00.210 00:27:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:00.210 00:27:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:14:00.210 00:27:53 -- common/autotest_common.sh@859 -- # break 00:14:00.210 00:27:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:00.210 00:27:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:00.210 00:27:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.210 1+0 records in 00:14:00.210 1+0 records out 00:14:00.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477907 s, 8.6 MB/s 00:14:00.210 00:27:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.210 00:27:53 -- common/autotest_common.sh@872 -- # size=4096 00:14:00.210 00:27:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.210 00:27:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:00.210 00:27:53 -- common/autotest_common.sh@875 -- # return 0 00:14:00.210 00:27:53 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.210 00:27:53 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:00.210 00:27:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:14:00.775 /dev/nbd13 00:14:00.775 00:27:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:00.775 00:27:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:00.775 00:27:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:14:00.775 00:27:54 -- common/autotest_common.sh@855 -- # local i 00:14:00.775 00:27:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:00.775 00:27:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:00.775 00:27:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:14:00.775 00:27:54 -- common/autotest_common.sh@859 -- # break 00:14:00.775 00:27:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:00.775 00:27:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:00.775 00:27:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.775 1+0 records in 00:14:00.775 1+0 records out 00:14:00.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510894 s, 8.0 MB/s 00:14:00.775 00:27:54 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.775 00:27:54 -- common/autotest_common.sh@872 -- # size=4096 00:14:00.775 00:27:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.775 00:27:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:00.775 00:27:54 -- common/autotest_common.sh@875 -- # return 0 00:14:00.775 00:27:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.775 00:27:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:00.775 00:27:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:14:01.033 /dev/nbd14 00:14:01.034 00:27:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:14:01.034 00:27:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:14:01.034 00:27:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:14:01.034 00:27:54 -- common/autotest_common.sh@855 -- # local i 00:14:01.034 00:27:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:01.034 00:27:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:01.034 00:27:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:14:01.034 00:27:54 -- common/autotest_common.sh@859 -- # break 00:14:01.034 00:27:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:01.034 00:27:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:01.034 00:27:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.034 1+0 records in 00:14:01.034 1+0 records out 00:14:01.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505967 s, 8.1 MB/s 00:14:01.034 00:27:54 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.034 00:27:54 -- common/autotest_common.sh@872 -- # size=4096 00:14:01.034 00:27:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.034 00:27:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:01.034 00:27:54 -- common/autotest_common.sh@875 -- # return 0 
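The trace above repeats the same readiness check for each device it exports over NBD: the waitfornbd helper polls /proc/partitions for the device name, then reads a single 4 KiB block with dd (iflag=direct) into a scratch file and confirms a non-zero number of bytes arrived before returning. Below is a minimal sketch of that pattern reconstructed from the logged commands; the helper name, loop bounds, and scratch-file path mirror the trace, while the sleep back-off between retries and the exact error handling are assumptions (the log only shows the immediately-successful path).

    # Sketch of the readiness check exercised in the trace above.
    # Assumption: a short sleep between retries; the trace never needs a retry.
    waitfornbd() {
        local nbd_name=$1
        local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        local i size

        # Wait for the kernel to publish the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                break
            fi
            sleep 0.1
        done

        # Prove the block device is readable: pull one 4 KiB block through it.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of="$tmp_file" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$tmp_file")
                rm -f "$tmp_file"
                if [ "$size" != 0 ]; then
                    return 0
                fi
            fi
            sleep 0.1
        done
        return 1
    }

The later part of the log applies the same idea at teardown (waitfornbd_exit checks that the name has disappeared from /proc/partitions after nbd_stop_disk) and then re-verifies data by writing 256 random 4 KiB blocks to each device and comparing against the source file.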
00:14:01.034 00:27:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.034 00:27:54 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:01.034 00:27:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:14:01.304 /dev/nbd15 00:14:01.304 00:27:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:14:01.304 00:27:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:14:01.304 00:27:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd15 00:14:01.304 00:27:55 -- common/autotest_common.sh@855 -- # local i 00:14:01.304 00:27:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:01.304 00:27:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:01.304 00:27:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd15 /proc/partitions 00:14:01.304 00:27:55 -- common/autotest_common.sh@859 -- # break 00:14:01.304 00:27:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:01.304 00:27:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:01.304 00:27:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.304 1+0 records in 00:14:01.304 1+0 records out 00:14:01.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489576 s, 8.4 MB/s 00:14:01.304 00:27:55 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.304 00:27:55 -- common/autotest_common.sh@872 -- # size=4096 00:14:01.304 00:27:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.304 00:27:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:01.304 00:27:55 -- common/autotest_common.sh@875 -- # return 0 00:14:01.304 00:27:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.304 00:27:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:01.304 00:27:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:14:01.561 /dev/nbd2 00:14:01.561 00:27:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:14:01.561 00:27:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:14:01.561 00:27:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:14:01.561 00:27:55 -- common/autotest_common.sh@855 -- # local i 00:14:01.561 00:27:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:01.561 00:27:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:01.561 00:27:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:14:01.561 00:27:55 -- common/autotest_common.sh@859 -- # break 00:14:01.561 00:27:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:01.561 00:27:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:01.561 00:27:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.561 1+0 records in 00:14:01.561 1+0 records out 00:14:01.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584852 s, 7.0 MB/s 00:14:01.561 00:27:55 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.561 00:27:55 -- common/autotest_common.sh@872 -- # size=4096 00:14:01.561 00:27:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.561 00:27:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:01.561 00:27:55 -- common/autotest_common.sh@875 
-- # return 0 00:14:01.562 00:27:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.562 00:27:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:01.562 00:27:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:14:02.127 /dev/nbd3 00:14:02.127 00:27:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:14:02.127 00:27:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:14:02.127 00:27:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:14:02.127 00:27:55 -- common/autotest_common.sh@855 -- # local i 00:14:02.127 00:27:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:02.127 00:27:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:02.127 00:27:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:14:02.127 00:27:55 -- common/autotest_common.sh@859 -- # break 00:14:02.127 00:27:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:02.127 00:27:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:02.127 00:27:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.127 1+0 records in 00:14:02.127 1+0 records out 00:14:02.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568526 s, 7.2 MB/s 00:14:02.127 00:27:55 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.127 00:27:55 -- common/autotest_common.sh@872 -- # size=4096 00:14:02.127 00:27:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.127 00:27:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:02.127 00:27:55 -- common/autotest_common.sh@875 -- # return 0 00:14:02.127 00:27:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.127 00:27:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:02.127 00:27:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:14:02.386 /dev/nbd4 00:14:02.386 00:27:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:14:02.386 00:27:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:14:02.386 00:27:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:14:02.386 00:27:55 -- common/autotest_common.sh@855 -- # local i 00:14:02.386 00:27:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:02.386 00:27:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:02.386 00:27:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:14:02.386 00:27:55 -- common/autotest_common.sh@859 -- # break 00:14:02.386 00:27:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:02.386 00:27:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:02.386 00:27:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.386 1+0 records in 00:14:02.386 1+0 records out 00:14:02.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629034 s, 6.5 MB/s 00:14:02.386 00:27:55 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.386 00:27:55 -- common/autotest_common.sh@872 -- # size=4096 00:14:02.386 00:27:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.386 00:27:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:02.386 00:27:55 -- 
common/autotest_common.sh@875 -- # return 0 00:14:02.386 00:27:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.386 00:27:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:02.386 00:27:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:14:02.386 /dev/nbd5 00:14:02.643 00:27:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:14:02.643 00:27:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:14:02.643 00:27:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:14:02.643 00:27:56 -- common/autotest_common.sh@855 -- # local i 00:14:02.643 00:27:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:02.643 00:27:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:02.643 00:27:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:14:02.643 00:27:56 -- common/autotest_common.sh@859 -- # break 00:14:02.643 00:27:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:02.643 00:27:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:02.643 00:27:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.643 1+0 records in 00:14:02.643 1+0 records out 00:14:02.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000819706 s, 5.0 MB/s 00:14:02.643 00:27:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.643 00:27:56 -- common/autotest_common.sh@872 -- # size=4096 00:14:02.643 00:27:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.643 00:27:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:02.643 00:27:56 -- common/autotest_common.sh@875 -- # return 0 00:14:02.643 00:27:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.643 00:27:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:02.643 00:27:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:14:02.900 /dev/nbd6 00:14:02.900 00:27:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:14:02.900 00:27:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:14:02.900 00:27:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:14:02.900 00:27:56 -- common/autotest_common.sh@855 -- # local i 00:14:02.900 00:27:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:02.900 00:27:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:02.900 00:27:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:14:02.900 00:27:56 -- common/autotest_common.sh@859 -- # break 00:14:02.900 00:27:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:02.900 00:27:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:02.900 00:27:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.900 1+0 records in 00:14:02.900 1+0 records out 00:14:02.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927448 s, 4.4 MB/s 00:14:02.900 00:27:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.900 00:27:56 -- common/autotest_common.sh@872 -- # size=4096 00:14:02.900 00:27:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.900 00:27:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:02.900 00:27:56 -- 
common/autotest_common.sh@875 -- # return 0 00:14:02.900 00:27:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.900 00:27:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:02.900 00:27:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:14:03.157 /dev/nbd7 00:14:03.157 00:27:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:14:03.157 00:27:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:14:03.157 00:27:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd7 00:14:03.157 00:27:56 -- common/autotest_common.sh@855 -- # local i 00:14:03.157 00:27:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:03.157 00:27:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:03.157 00:27:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd7 /proc/partitions 00:14:03.157 00:27:56 -- common/autotest_common.sh@859 -- # break 00:14:03.157 00:27:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:03.157 00:27:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:03.157 00:27:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.157 1+0 records in 00:14:03.157 1+0 records out 00:14:03.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000900969 s, 4.5 MB/s 00:14:03.157 00:27:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.157 00:27:56 -- common/autotest_common.sh@872 -- # size=4096 00:14:03.157 00:27:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.157 00:27:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:03.157 00:27:56 -- common/autotest_common.sh@875 -- # return 0 00:14:03.157 00:27:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.157 00:27:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:03.157 00:27:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:14:03.743 /dev/nbd8 00:14:03.743 00:27:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:14:03.743 00:27:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:14:03.743 00:27:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd8 00:14:03.743 00:27:57 -- common/autotest_common.sh@855 -- # local i 00:14:03.743 00:27:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:03.743 00:27:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:03.743 00:27:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd8 /proc/partitions 00:14:03.743 00:27:57 -- common/autotest_common.sh@859 -- # break 00:14:03.743 00:27:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:03.743 00:27:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:03.743 00:27:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.743 1+0 records in 00:14:03.743 1+0 records out 00:14:03.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000869705 s, 4.7 MB/s 00:14:03.743 00:27:57 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.743 00:27:57 -- common/autotest_common.sh@872 -- # size=4096 00:14:03.743 00:27:57 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.743 00:27:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:03.743 00:27:57 
-- common/autotest_common.sh@875 -- # return 0 00:14:03.743 00:27:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.743 00:27:57 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:03.743 00:27:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:14:04.000 /dev/nbd9 00:14:04.000 00:27:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:14:04.000 00:27:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:14:04.000 00:27:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd9 00:14:04.000 00:27:57 -- common/autotest_common.sh@855 -- # local i 00:14:04.000 00:27:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:04.000 00:27:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:04.000 00:27:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd9 /proc/partitions 00:14:04.000 00:27:57 -- common/autotest_common.sh@859 -- # break 00:14:04.000 00:27:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:04.000 00:27:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:04.000 00:27:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:04.000 1+0 records in 00:14:04.000 1+0 records out 00:14:04.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131307 s, 3.1 MB/s 00:14:04.000 00:27:57 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.000 00:27:57 -- common/autotest_common.sh@872 -- # size=4096 00:14:04.000 00:27:57 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:04.000 00:27:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:04.000 00:27:57 -- common/autotest_common.sh@875 -- # return 0 00:14:04.000 00:27:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:04.000 00:27:57 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:14:04.000 00:27:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:04.000 00:27:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:04.000 00:27:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:04.258 00:27:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd0", 00:14:04.258 "bdev_name": "Malloc0" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd1", 00:14:04.258 "bdev_name": "Malloc1p0" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd10", 00:14:04.258 "bdev_name": "Malloc1p1" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd11", 00:14:04.258 "bdev_name": "Malloc2p0" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd12", 00:14:04.258 "bdev_name": "Malloc2p1" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd13", 00:14:04.258 "bdev_name": "Malloc2p2" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd14", 00:14:04.258 "bdev_name": "Malloc2p3" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd15", 00:14:04.258 "bdev_name": "Malloc2p4" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd2", 00:14:04.258 "bdev_name": "Malloc2p5" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd3", 00:14:04.258 "bdev_name": "Malloc2p6" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd4", 00:14:04.258 "bdev_name": "Malloc2p7" 00:14:04.258 }, 00:14:04.258 { 
00:14:04.258 "nbd_device": "/dev/nbd5", 00:14:04.258 "bdev_name": "TestPT" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd6", 00:14:04.258 "bdev_name": "raid0" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd7", 00:14:04.258 "bdev_name": "concat0" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd8", 00:14:04.258 "bdev_name": "raid1" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd9", 00:14:04.258 "bdev_name": "AIO0" 00:14:04.258 } 00:14:04.258 ]' 00:14:04.258 00:27:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd0", 00:14:04.258 "bdev_name": "Malloc0" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd1", 00:14:04.258 "bdev_name": "Malloc1p0" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd10", 00:14:04.258 "bdev_name": "Malloc1p1" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd11", 00:14:04.258 "bdev_name": "Malloc2p0" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd12", 00:14:04.258 "bdev_name": "Malloc2p1" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd13", 00:14:04.258 "bdev_name": "Malloc2p2" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd14", 00:14:04.258 "bdev_name": "Malloc2p3" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd15", 00:14:04.258 "bdev_name": "Malloc2p4" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd2", 00:14:04.258 "bdev_name": "Malloc2p5" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd3", 00:14:04.258 "bdev_name": "Malloc2p6" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd4", 00:14:04.258 "bdev_name": "Malloc2p7" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd5", 00:14:04.258 "bdev_name": "TestPT" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd6", 00:14:04.258 "bdev_name": "raid0" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd7", 00:14:04.258 "bdev_name": "concat0" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd8", 00:14:04.258 "bdev_name": "raid1" 00:14:04.258 }, 00:14:04.258 { 00:14:04.258 "nbd_device": "/dev/nbd9", 00:14:04.258 "bdev_name": "AIO0" 00:14:04.258 } 00:14:04.258 ]' 00:14:04.258 00:27:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:04.258 00:27:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:04.258 /dev/nbd1 00:14:04.258 /dev/nbd10 00:14:04.258 /dev/nbd11 00:14:04.258 /dev/nbd12 00:14:04.258 /dev/nbd13 00:14:04.258 /dev/nbd14 00:14:04.258 /dev/nbd15 00:14:04.258 /dev/nbd2 00:14:04.258 /dev/nbd3 00:14:04.258 /dev/nbd4 00:14:04.258 /dev/nbd5 00:14:04.258 /dev/nbd6 00:14:04.258 /dev/nbd7 00:14:04.258 /dev/nbd8 00:14:04.258 /dev/nbd9' 00:14:04.258 00:27:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:04.258 /dev/nbd1 00:14:04.258 /dev/nbd10 00:14:04.258 /dev/nbd11 00:14:04.258 /dev/nbd12 00:14:04.258 /dev/nbd13 00:14:04.258 /dev/nbd14 00:14:04.259 /dev/nbd15 00:14:04.259 /dev/nbd2 00:14:04.259 /dev/nbd3 00:14:04.259 /dev/nbd4 00:14:04.259 /dev/nbd5 00:14:04.259 /dev/nbd6 00:14:04.259 /dev/nbd7 00:14:04.259 /dev/nbd8 00:14:04.259 /dev/nbd9' 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@65 -- # count=16 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@66 -- # echo 16 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@95 -- # count=16 00:14:04.259 00:27:58 -- 
bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:04.259 256+0 records in 00:14:04.259 256+0 records out 00:14:04.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520085 s, 202 MB/s 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:04.259 00:27:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:04.516 256+0 records in 00:14:04.516 256+0 records out 00:14:04.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223935 s, 4.7 MB/s 00:14:04.516 00:27:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:04.516 00:27:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:04.773 256+0 records in 00:14:04.773 256+0 records out 00:14:04.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.217426 s, 4.8 MB/s 00:14:04.773 00:27:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:04.773 00:27:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:05.031 256+0 records in 00:14:05.031 256+0 records out 00:14:05.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.221916 s, 4.7 MB/s 00:14:05.031 00:27:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:05.031 00:27:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:05.287 256+0 records in 00:14:05.287 256+0 records out 00:14:05.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.230527 s, 4.5 MB/s 00:14:05.287 00:27:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:05.287 00:27:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:05.545 256+0 records in 00:14:05.545 256+0 records out 00:14:05.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222264 s, 4.7 MB/s 00:14:05.545 00:27:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:05.545 00:27:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:05.802 256+0 records in 00:14:05.802 256+0 records out 00:14:05.802 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.265847 s, 3.9 MB/s 00:14:05.802 00:27:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:05.802 00:27:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:14:06.059 256+0 records in 00:14:06.059 256+0 records out 00:14:06.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209124 s, 5.0 MB/s 00:14:06.059 00:27:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.059 00:27:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:14:06.059 256+0 records in 00:14:06.059 256+0 records out 00:14:06.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191086 s, 5.5 MB/s 00:14:06.059 00:27:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.059 00:27:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:14:06.317 256+0 records in 00:14:06.317 256+0 records out 00:14:06.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191919 s, 5.5 MB/s 00:14:06.317 00:28:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.317 00:28:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:14:06.574 256+0 records in 00:14:06.574 256+0 records out 00:14:06.574 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.19987 s, 5.2 MB/s 00:14:06.574 00:28:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.574 00:28:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:14:06.834 256+0 records in 00:14:06.834 256+0 records out 00:14:06.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18999 s, 5.5 MB/s 00:14:06.834 00:28:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.834 00:28:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:14:06.834 256+0 records in 00:14:06.834 256+0 records out 00:14:06.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185311 s, 5.7 MB/s 00:14:06.834 00:28:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:06.834 00:28:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:14:07.155 256+0 records in 00:14:07.155 256+0 records out 00:14:07.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.202553 s, 5.2 MB/s 00:14:07.155 00:28:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.155 00:28:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:14:07.413 256+0 records in 00:14:07.413 256+0 records out 00:14:07.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196287 s, 5.3 MB/s 00:14:07.413 00:28:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.413 00:28:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:14:07.678 256+0 records in 00:14:07.678 256+0 records out 00:14:07.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196358 s, 5.3 MB/s 00:14:07.678 00:28:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.678 00:28:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:14:07.999 256+0 records in 00:14:07.999 256+0 records out 00:14:07.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.28168 s, 3.7 MB/s 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 
/dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@51 -- # local i 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.999 00:28:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:08.258 00:28:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.258 00:28:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.258 00:28:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.258 00:28:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.258 00:28:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.258 00:28:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.258 00:28:02 -- bdev/nbd_common.sh@41 -- # break 00:14:08.258 00:28:02 -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.258 00:28:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.258 00:28:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:08.825 00:28:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.825 00:28:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.825 00:28:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.825 00:28:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.825 00:28:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.825 00:28:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.825 00:28:02 -- bdev/nbd_common.sh@41 -- # break 00:14:08.825 00:28:02 -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.825 00:28:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.825 00:28:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:09.090 00:28:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:09.090 00:28:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:09.090 00:28:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:09.090 00:28:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.090 00:28:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.090 00:28:02 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:09.090 00:28:02 -- bdev/nbd_common.sh@41 -- # break 00:14:09.090 00:28:02 -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.090 00:28:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.090 00:28:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:09.347 00:28:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:09.347 00:28:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:09.347 00:28:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:09.347 00:28:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.347 00:28:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.347 00:28:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:09.347 00:28:02 -- bdev/nbd_common.sh@41 -- # break 00:14:09.347 00:28:02 -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.347 00:28:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.347 00:28:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:09.605 00:28:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:09.605 00:28:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:09.605 00:28:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:09.605 00:28:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.605 00:28:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.605 00:28:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:09.605 00:28:03 -- bdev/nbd_common.sh@41 -- # break 00:14:09.605 00:28:03 -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.605 00:28:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.605 00:28:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:09.883 00:28:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:09.883 00:28:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:09.883 00:28:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:09.883 00:28:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.883 00:28:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.883 00:28:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:09.883 00:28:03 -- bdev/nbd_common.sh@41 -- # break 00:14:09.883 00:28:03 -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.883 00:28:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.883 00:28:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:14:10.141 00:28:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:14:10.141 00:28:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:14:10.141 00:28:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:14:10.141 00:28:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.141 00:28:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.141 00:28:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:14:10.141 00:28:03 -- bdev/nbd_common.sh@41 -- # break 00:14:10.141 00:28:03 -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.141 00:28:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.141 00:28:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:14:10.400 00:28:04 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:14:10.400 00:28:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:14:10.400 00:28:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:14:10.400 00:28:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.400 00:28:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.400 00:28:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:14:10.400 00:28:04 -- bdev/nbd_common.sh@41 -- # break 00:14:10.400 00:28:04 -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.400 00:28:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.400 00:28:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:10.657 00:28:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:10.657 00:28:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:10.657 00:28:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:10.657 00:28:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.657 00:28:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.657 00:28:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:10.657 00:28:04 -- bdev/nbd_common.sh@41 -- # break 00:14:10.657 00:28:04 -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.657 00:28:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.657 00:28:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:11.222 00:28:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:11.222 00:28:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:11.222 00:28:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:11.222 00:28:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.222 00:28:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.222 00:28:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:11.222 00:28:04 -- bdev/nbd_common.sh@41 -- # break 00:14:11.222 00:28:04 -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.222 00:28:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.222 00:28:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:11.480 00:28:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:11.480 00:28:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:11.480 00:28:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:11.480 00:28:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.480 00:28:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.480 00:28:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:11.481 00:28:05 -- bdev/nbd_common.sh@41 -- # break 00:14:11.481 00:28:05 -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.481 00:28:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.481 00:28:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:11.739 00:28:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:11.739 00:28:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:11.739 00:28:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:11.739 00:28:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.739 00:28:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.739 00:28:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:11.739 00:28:05 -- bdev/nbd_common.sh@41 
-- # break 00:14:11.739 00:28:05 -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.739 00:28:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.739 00:28:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:14:11.997 00:28:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:14:11.997 00:28:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:14:11.997 00:28:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:14:11.997 00:28:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.997 00:28:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.997 00:28:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:14:11.997 00:28:05 -- bdev/nbd_common.sh@41 -- # break 00:14:11.997 00:28:05 -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.997 00:28:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.997 00:28:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:14:12.256 00:28:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:14:12.256 00:28:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:14:12.256 00:28:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:14:12.256 00:28:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.256 00:28:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.256 00:28:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:14:12.256 00:28:05 -- bdev/nbd_common.sh@41 -- # break 00:14:12.256 00:28:05 -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.256 00:28:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.256 00:28:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:14:12.514 00:28:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:14:12.514 00:28:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:14:12.514 00:28:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:14:12.514 00:28:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.514 00:28:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.514 00:28:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:14:12.514 00:28:06 -- bdev/nbd_common.sh@41 -- # break 00:14:12.514 00:28:06 -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.514 00:28:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.514 00:28:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:14:12.773 00:28:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:14:12.774 00:28:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:14:12.774 00:28:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:14:12.774 00:28:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.774 00:28:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.774 00:28:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:14:12.774 00:28:06 -- bdev/nbd_common.sh@41 -- # break 00:14:12.774 00:28:06 -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.774 00:28:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:12.774 00:28:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:12.774 00:28:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:13.032 00:28:06 -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[]' 00:14:13.032 00:28:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:13.032 00:28:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:13.291 00:28:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:13.291 00:28:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:13.291 00:28:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:13.291 00:28:06 -- bdev/nbd_common.sh@65 -- # true 00:14:13.291 00:28:06 -- bdev/nbd_common.sh@65 -- # count=0 00:14:13.291 00:28:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:13.291 00:28:06 -- bdev/nbd_common.sh@104 -- # count=0 00:14:13.291 00:28:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:13.291 00:28:06 -- bdev/nbd_common.sh@109 -- # return 0 00:14:13.291 00:28:06 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:14:13.291 00:28:06 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:13.292 00:28:06 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:13.292 00:28:06 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:13.292 00:28:06 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:13.292 00:28:06 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:13.550 malloc_lvol_verify 00:14:13.550 00:28:07 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:13.808 176dc95f-1261-4e20-ad0a-2b9a5e3f827b 00:14:13.808 00:28:07 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:14.067 7e344559-7293-4839-abec-fe8717a88cd7 00:14:14.067 00:28:07 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:14.325 /dev/nbd0 00:14:14.325 00:28:08 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:14.325 mke2fs 1.46.5 (30-Dec-2021) 00:14:14.325 00:14:14.325 Filesystem too small for a journal 00:14:14.325 Discarding device blocks: 0/1024 done 00:14:14.325 Creating filesystem with 1024 4k blocks and 1024 inodes 00:14:14.325 00:14:14.325 Allocating group tables: 0/1 done 00:14:14.325 Writing inode tables: 0/1 done 00:14:14.325 Writing superblocks and filesystem accounting information: 0/1 done 00:14:14.325 00:14:14.325 00:28:08 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:14.325 00:28:08 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:14.325 00:28:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:14.325 00:28:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:14.325 00:28:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:14.325 00:28:08 -- bdev/nbd_common.sh@51 -- # local i 00:14:14.325 00:28:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.325 00:28:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:14.582 00:28:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:14.583 
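For reference, the lvol verification step traced above (nbd_with_lvol_verify) reduces to the RPC sequence below. This is a minimal sketch distilled from the trace, reusing the same socket, names, and sizes as this run; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and sizes are in MiB:

  # build a 16 MiB malloc bdev with 512-byte blocks, put an lvstore on it,
  # carve out a 4 MiB lvol, export it over NBD, and format it to prove it is writable
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  # after the stop, waitfornbd_exit polls /proc/partitions until nbd0 disappears,
  # which is exactly what the grep loop traced below is doing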
00:28:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:14.583 00:28:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:14.583 00:28:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.583 00:28:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.583 00:28:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:14.583 00:28:08 -- bdev/nbd_common.sh@41 -- # break 00:14:14.583 00:28:08 -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.583 00:28:08 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:14.583 00:28:08 -- bdev/nbd_common.sh@147 -- # return 0 00:14:14.583 00:28:08 -- bdev/blockdev.sh@326 -- # killprocess 116718 00:14:14.583 00:28:08 -- common/autotest_common.sh@936 -- # '[' -z 116718 ']' 00:14:14.583 00:28:08 -- common/autotest_common.sh@940 -- # kill -0 116718 00:14:14.583 00:28:08 -- common/autotest_common.sh@941 -- # uname 00:14:14.583 00:28:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:14.583 00:28:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116718 00:14:14.841 00:28:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:14.841 00:28:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:14.841 killing process with pid 116718 00:14:14.841 00:28:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116718' 00:14:14.841 00:28:08 -- common/autotest_common.sh@955 -- # kill 116718 00:14:14.841 00:28:08 -- common/autotest_common.sh@960 -- # wait 116718 00:14:17.368 00:28:11 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:14:17.368 00:14:17.368 real 0m30.605s 00:14:17.368 user 0m39.605s 00:14:17.368 sys 0m12.479s 00:14:17.368 00:28:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:17.368 00:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:17.368 ************************************ 00:14:17.368 END TEST bdev_nbd 00:14:17.368 ************************************ 00:14:17.625 00:28:11 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:14:17.625 00:28:11 -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:14:17.625 00:28:11 -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:14:17.625 00:28:11 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:14:17.625 00:28:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:17.625 00:28:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:17.625 00:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:17.626 ************************************ 00:14:17.626 START TEST bdev_fio 00:14:17.626 ************************************ 00:14:17.626 00:28:11 -- common/autotest_common.sh@1111 -- # fio_test_suite '' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@331 -- # local env_context 00:14:17.626 00:28:11 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:17.626 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:17.626 00:28:11 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:17.626 00:28:11 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:14:17.626 00:28:11 -- bdev/blockdev.sh@339 -- # echo '' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@339 -- # env_context= 00:14:17.626 00:28:11 -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:17.626 00:28:11 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:17.626 00:28:11 -- common/autotest_common.sh@1267 -- # 
local workload=verify 00:14:17.626 00:28:11 -- common/autotest_common.sh@1268 -- # local bdev_type=AIO 00:14:17.626 00:28:11 -- common/autotest_common.sh@1269 -- # local env_context= 00:14:17.626 00:28:11 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:14:17.626 00:28:11 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:17.626 00:28:11 -- common/autotest_common.sh@1277 -- # '[' -z verify ']' 00:14:17.626 00:28:11 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:14:17.626 00:28:11 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:17.626 00:28:11 -- common/autotest_common.sh@1287 -- # cat 00:14:17.626 00:28:11 -- common/autotest_common.sh@1299 -- # '[' verify == verify ']' 00:14:17.626 00:28:11 -- common/autotest_common.sh@1300 -- # cat 00:14:17.626 00:28:11 -- common/autotest_common.sh@1309 -- # '[' AIO == AIO ']' 00:14:17.626 00:28:11 -- common/autotest_common.sh@1310 -- # /usr/src/fio/fio --version 00:14:17.626 00:28:11 -- common/autotest_common.sh@1310 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:17.626 00:28:11 -- common/autotest_common.sh@1311 -- # echo serialize_overlap=1 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b 
in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:14:17.626 00:28:11 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:17.626 00:28:11 -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:14:17.626 00:28:11 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:17.626 00:28:11 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:17.626 00:28:11 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:14:17.626 00:28:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:17.626 00:28:11 -- common/autotest_common.sh@10 -- # set +x 00:14:17.885 ************************************ 00:14:17.885 START TEST bdev_fio_rw_verify 00:14:17.885 ************************************ 00:14:17.885 00:28:11 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:17.885 00:28:11 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:17.885 00:28:11 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:14:17.885 00:28:11 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:17.885 00:28:11 -- common/autotest_common.sh@1325 -- # local sanitizers 00:14:17.885 00:28:11 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:17.885 00:28:11 -- common/autotest_common.sh@1327 -- # shift 00:14:17.885 00:28:11 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:14:17.885 00:28:11 -- common/autotest_common.sh@1330 -- # for sanitizer in 
"${sanitizers[@]}" 00:14:17.885 00:28:11 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:17.885 00:28:11 -- common/autotest_common.sh@1331 -- # grep libasan 00:14:17.885 00:28:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:14:17.885 00:28:11 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:14:17.885 00:28:11 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:14:17.885 00:28:11 -- common/autotest_common.sh@1333 -- # break 00:14:17.885 00:28:11 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:17.885 00:28:11 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:17.885 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:17.885 fio-3.35 00:14:17.885 Starting 16 threads 00:14:30.105 00:14:30.105 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=117950: Wed Apr 24 00:28:23 2024 00:14:30.105 read: IOPS=78.3k, BW=306MiB/s (321MB/s)(3064MiB/10014msec) 00:14:30.105 slat (usec): min=2, max=68026, avg=34.01, stdev=435.54 00:14:30.105 clat (usec): min=6, max=40542, avg=279.85, stdev=1261.76 00:14:30.105 lat (usec): 
min=22, max=68282, avg=313.86, stdev=1334.65 00:14:30.105 clat percentiles (usec): 00:14:30.105 | 50.000th=[ 169], 99.000th=[ 611], 99.900th=[16319], 99.990th=[28181], 00:14:30.105 | 99.999th=[40633] 00:14:30.105 write: IOPS=124k, BW=483MiB/s (507MB/s)(4791MiB/9910msec); 0 zone resets 00:14:30.105 slat (usec): min=5, max=59442, avg=65.87, stdev=697.15 00:14:30.105 clat (usec): min=9, max=59772, avg=373.35, stdev=1576.79 00:14:30.105 lat (usec): min=36, max=59809, avg=439.22, stdev=1724.02 00:14:30.105 clat percentiles (usec): 00:14:30.105 | 50.000th=[ 212], 99.000th=[ 4752], 99.900th=[20317], 99.990th=[36439], 00:14:30.105 | 99.999th=[53740] 00:14:30.105 bw ( KiB/s): min=294405, max=796560, per=99.24%, avg=491273.42, stdev=8700.58, samples=306 00:14:30.105 iops : min=73601, max=199140, avg=122818.24, stdev=2175.15, samples=306 00:14:30.105 lat (usec) : 10=0.01%, 20=0.01%, 50=0.60%, 100=12.42%, 250=58.16% 00:14:30.105 lat (usec) : 500=26.14%, 750=1.50%, 1000=0.10% 00:14:30.105 lat (msec) : 2=0.09%, 4=0.07%, 10=0.18%, 20=0.63%, 50=0.10% 00:14:30.105 lat (msec) : 100=0.01% 00:14:30.105 cpu : usr=56.05%, sys=2.14%, ctx=265016, majf=2, minf=84962 00:14:30.105 IO depths : 1=11.3%, 2=23.6%, 4=52.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:30.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.106 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.106 issued rwts: total=784470,1226495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.106 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:30.106 00:14:30.106 Run status group 0 (all jobs): 00:14:30.106 READ: bw=306MiB/s (321MB/s), 306MiB/s-306MiB/s (321MB/s-321MB/s), io=3064MiB (3213MB), run=10014-10014msec 00:14:30.106 WRITE: bw=483MiB/s (507MB/s), 483MiB/s-483MiB/s (507MB/s-507MB/s), io=4791MiB (5024MB), run=9910-9910msec 00:14:33.397 ----------------------------------------------------- 00:14:33.397 Suppressions used: 00:14:33.397 count bytes template 00:14:33.397 16 140 /usr/src/fio/parse.c 00:14:33.397 8489 814944 /usr/src/fio/iolog.c 00:14:33.397 1 904 libcrypto.so 00:14:33.397 ----------------------------------------------------- 00:14:33.397 00:14:33.397 00:14:33.397 real 0m15.650s 00:14:33.397 user 1m36.365s 00:14:33.397 sys 0m4.472s 00:14:33.397 00:28:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.397 00:28:27 -- common/autotest_common.sh@10 -- # set +x 00:14:33.397 ************************************ 00:14:33.397 END TEST bdev_fio_rw_verify 00:14:33.397 ************************************ 00:14:33.397 00:28:27 -- bdev/blockdev.sh@350 -- # rm -f 00:14:33.397 00:28:27 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:33.397 00:28:27 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:33.397 00:28:27 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:33.397 00:28:27 -- common/autotest_common.sh@1267 -- # local workload=trim 00:14:33.397 00:28:27 -- common/autotest_common.sh@1268 -- # local bdev_type= 00:14:33.397 00:28:27 -- common/autotest_common.sh@1269 -- # local env_context= 00:14:33.397 00:28:27 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:14:33.397 00:28:27 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:33.397 00:28:27 -- common/autotest_common.sh@1277 -- # '[' -z trim ']' 00:14:33.397 00:28:27 -- 
common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:14:33.397 00:28:27 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:33.397 00:28:27 -- common/autotest_common.sh@1287 -- # cat 00:14:33.397 00:28:27 -- common/autotest_common.sh@1299 -- # '[' trim == verify ']' 00:14:33.397 00:28:27 -- common/autotest_common.sh@1314 -- # '[' trim == trim ']' 00:14:33.397 00:28:27 -- common/autotest_common.sh@1315 -- # echo rw=trimwrite 00:14:33.397 00:28:27 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:33.398 00:28:27 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "a5e51566-0212-4393-8e3c-306f8b9a05c3"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a5e51566-0212-4393-8e3c-306f8b9a05c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "1d18f256-753d-5d6d-bf17-39566615a922"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1d18f256-753d-5d6d-bf17-39566615a922",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "a3a7e419-114e-5aff-8cdf-c9cc54b4cc8f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "a3a7e419-114e-5aff-8cdf-c9cc54b4cc8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "31fe191e-86a1-5fe9-988f-c5935725b207"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "31fe191e-86a1-5fe9-988f-c5935725b207",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": 
true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7a477e20-dd40-5722-8987-d4c52c49a0b5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7a477e20-dd40-5722-8987-d4c52c49a0b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "2b402205-0bd3-5689-b56c-e72ed1b9971f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2b402205-0bd3-5689-b56c-e72ed1b9971f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "442c9ffa-4bd8-55da-95ac-bb40b85354aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "442c9ffa-4bd8-55da-95ac-bb40b85354aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "335d7e56-6992-5e91-920d-e1a86ea15bc6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "335d7e56-6992-5e91-920d-e1a86ea15bc6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "7ab84337-29d9-548d-a320-edfaa1f9b268"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7ab84337-29d9-548d-a320-edfaa1f9b268",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "fe87f3c2-531e-5f3c-82ec-3767313f1d27"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fe87f3c2-531e-5f3c-82ec-3767313f1d27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "2168058e-35b6-553f-8035-fdbbd63f6e06"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2168058e-35b6-553f-8035-fdbbd63f6e06",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "40c66c3c-f367-5630-a5b7-3b198dd3694c"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "40c66c3c-f367-5630-a5b7-3b198dd3694c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "691673d3-5879-4ea0-9b23-3d70558f8087"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "691673d3-5879-4ea0-9b23-3d70558f8087",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "691673d3-5879-4ea0-9b23-3d70558f8087",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "dea696ff-f5c5-4aea-be0b-23fb451f5383",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "df423bac-5bcd-401f-aa1f-29b5da62b006",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "67c4df71-e620-4d2e-b721-bea4ddada5ba"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "67c4df71-e620-4d2e-b721-bea4ddada5ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "67c4df71-e620-4d2e-b721-bea4ddada5ba",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "49180963-26e2-4b23-ab21-35b2bdf84d27",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "f4c1e0be-7f38-4f71-b591-f89975e12450",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "7d742d9e-1189-4998-b241-de5222bb39f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "7d742d9e-1189-4998-b241-de5222bb39f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7d742d9e-1189-4998-b241-de5222bb39f3",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "918cd98a-7ccc-4f42-8f02-b0066c133702",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7a23faeb-86a1-4a91-be22-40dbb525b21e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "dd2a1a51-12b1-459d-8af2-67824642e999"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "dd2a1a51-12b1-459d-8af2-67824642e999",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:33.658 00:28:27 -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:14:33.658 Malloc1p0 00:14:33.658 Malloc1p1 00:14:33.658 Malloc2p0 00:14:33.658 Malloc2p1 00:14:33.658 Malloc2p2 00:14:33.658 Malloc2p3 00:14:33.658 Malloc2p4 00:14:33.658 Malloc2p5 00:14:33.658 Malloc2p6 00:14:33.658 Malloc2p7 00:14:33.658 TestPT 00:14:33.658 raid0 00:14:33.658 concat0 ]] 00:14:33.658 00:28:27 -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "a5e51566-0212-4393-8e3c-306f8b9a05c3"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a5e51566-0212-4393-8e3c-306f8b9a05c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "1d18f256-753d-5d6d-bf17-39566615a922"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "1d18f256-753d-5d6d-bf17-39566615a922",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": 
true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "a3a7e419-114e-5aff-8cdf-c9cc54b4cc8f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "a3a7e419-114e-5aff-8cdf-c9cc54b4cc8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "31fe191e-86a1-5fe9-988f-c5935725b207"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "31fe191e-86a1-5fe9-988f-c5935725b207",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7a477e20-dd40-5722-8987-d4c52c49a0b5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7a477e20-dd40-5722-8987-d4c52c49a0b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "2b402205-0bd3-5689-b56c-e72ed1b9971f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2b402205-0bd3-5689-b56c-e72ed1b9971f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "442c9ffa-4bd8-55da-95ac-bb40b85354aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "442c9ffa-4bd8-55da-95ac-bb40b85354aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "335d7e56-6992-5e91-920d-e1a86ea15bc6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "335d7e56-6992-5e91-920d-e1a86ea15bc6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "7ab84337-29d9-548d-a320-edfaa1f9b268"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7ab84337-29d9-548d-a320-edfaa1f9b268",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "fe87f3c2-531e-5f3c-82ec-3767313f1d27"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fe87f3c2-531e-5f3c-82ec-3767313f1d27",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "2168058e-35b6-553f-8035-fdbbd63f6e06"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2168058e-35b6-553f-8035-fdbbd63f6e06",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' 
"40c66c3c-f367-5630-a5b7-3b198dd3694c"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "40c66c3c-f367-5630-a5b7-3b198dd3694c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "691673d3-5879-4ea0-9b23-3d70558f8087"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "691673d3-5879-4ea0-9b23-3d70558f8087",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "691673d3-5879-4ea0-9b23-3d70558f8087",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "dea696ff-f5c5-4aea-be0b-23fb451f5383",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "df423bac-5bcd-401f-aa1f-29b5da62b006",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "67c4df71-e620-4d2e-b721-bea4ddada5ba"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "67c4df71-e620-4d2e-b721-bea4ddada5ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": 
"67c4df71-e620-4d2e-b721-bea4ddada5ba",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "49180963-26e2-4b23-ab21-35b2bdf84d27",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "f4c1e0be-7f38-4f71-b591-f89975e12450",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "7d742d9e-1189-4998-b241-de5222bb39f3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "7d742d9e-1189-4998-b241-de5222bb39f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "7d742d9e-1189-4998-b241-de5222bb39f3",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "918cd98a-7ccc-4f42-8f02-b0066c133702",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7a23faeb-86a1-4a91-be22-40dbb525b21e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "dd2a1a51-12b1-459d-8af2-67824642e999"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "dd2a1a51-12b1-459d-8af2-67824642e999",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 
00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:14:33.660 00:28:27 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:14:33.660 00:28:27 -- 
bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:14:33.660 00:28:27 -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:14:33.660 00:28:27 -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.660 00:28:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:14:33.660 00:28:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.660 00:28:27 -- common/autotest_common.sh@10 -- # set +x 00:14:33.660 ************************************ 00:14:33.660 START TEST bdev_fio_trim 00:14:33.660 ************************************ 00:14:33.660 00:28:27 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.660 00:28:27 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.660 00:28:27 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:14:33.660 00:28:27 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:33.660 00:28:27 -- common/autotest_common.sh@1325 -- # local sanitizers 00:14:33.660 00:28:27 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.660 00:28:27 -- common/autotest_common.sh@1327 -- # shift 00:14:33.660 00:28:27 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:14:33.660 00:28:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:14:33.660 00:28:27 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.660 00:28:27 -- common/autotest_common.sh@1331 -- # grep libasan 00:14:33.660 00:28:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:14:33.660 00:28:27 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:14:33.660 00:28:27 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:14:33.660 00:28:27 -- common/autotest_common.sh@1333 -- # break 00:14:33.660 00:28:27 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:33.660 00:28:27 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.923 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.923 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.923 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.923 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.924 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.924 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.924 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.924 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.924 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.924 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.924 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.924 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.924 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.924 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.924 fio-3.35 00:14:33.924 Starting 14 threads 00:14:46.120 00:14:46.120 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=118194: Wed Apr 24 00:28:39 2024 00:14:46.120 write: IOPS=160k, BW=625MiB/s (656MB/s)(6255MiB/10002msec); 0 zone resets 00:14:46.120 slat (usec): min=2, max=31857, avg=31.55, stdev=377.90 00:14:46.120 clat (usec): min=22, max=32044, avg=213.47, stdev=980.33 00:14:46.120 lat (usec): min=35, max=32061, avg=245.02, stdev=1050.48 00:14:46.120 clat percentiles (usec): 00:14:46.120 | 50.000th=[ 149], 99.000th=[ 273], 99.900th=[16188], 99.990th=[21103], 00:14:46.120 | 99.999th=[28181] 00:14:46.120 bw ( KiB/s): min=448896, max=902368, per=100.00%, avg=641383.79, stdev=11544.41, samples=266 00:14:46.120 iops : min=112224, max=225592, avg=160345.84, stdev=2886.09, samples=266 00:14:46.120 trim: IOPS=160k, BW=625MiB/s (656MB/s)(6255MiB/10002msec); 0 zone resets 00:14:46.120 slat (usec): min=4, max=28036, avg=21.88, stdev=311.99 00:14:46.120 clat (usec): min=5, max=32061, avg=242.09, stdev=1051.26 00:14:46.120 lat (usec): min=16, max=32079, avg=263.97, stdev=1096.62 00:14:46.120 clat percentiles (usec): 00:14:46.120 | 50.000th=[ 167], 99.000th=[ 297], 99.900th=[16188], 99.990th=[23725], 00:14:46.120 | 99.999th=[28181] 00:14:46.120 bw ( KiB/s): min=448896, max=902368, per=100.00%, avg=641383.79, stdev=11544.33, samples=266 00:14:46.120 iops : min=112224, max=225592, avg=160346.05, stdev=2886.07, samples=266 00:14:46.120 lat (usec) : 10=0.01%, 20=0.01%, 50=0.20%, 100=11.45%, 250=84.30% 00:14:46.120 lat (usec) : 500=3.54%, 750=0.05%, 1000=0.01% 00:14:46.120 lat (msec) : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.40%, 50=0.02% 00:14:46.120 cpu : usr=68.87%, sys=0.51%, ctx=168124, majf=0, minf=930 00:14:46.120 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.120 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.120 issued rwts: total=0,1601401,1601404,0 
short=0,0,0,0 dropped=0,0,0,0 00:14:46.120 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:46.120 00:14:46.120 Run status group 0 (all jobs): 00:14:46.120 WRITE: bw=625MiB/s (656MB/s), 625MiB/s-625MiB/s (656MB/s-656MB/s), io=6255MiB (6559MB), run=10002-10002msec 00:14:46.120 TRIM: bw=625MiB/s (656MB/s), 625MiB/s-625MiB/s (656MB/s-656MB/s), io=6255MiB (6559MB), run=10002-10002msec 00:14:48.719 ----------------------------------------------------- 00:14:48.719 Suppressions used: 00:14:48.719 count bytes template 00:14:48.719 14 129 /usr/src/fio/parse.c 00:14:48.719 1 904 libcrypto.so 00:14:48.719 ----------------------------------------------------- 00:14:48.719 00:14:48.719 00:14:48.719 real 0m14.780s 00:14:48.719 user 1m42.576s 00:14:48.719 sys 0m1.655s 00:14:48.719 00:28:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:48.719 00:28:42 -- common/autotest_common.sh@10 -- # set +x 00:14:48.719 ************************************ 00:14:48.719 END TEST bdev_fio_trim 00:14:48.719 ************************************ 00:14:48.719 00:28:42 -- bdev/blockdev.sh@368 -- # rm -f 00:14:48.719 00:28:42 -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:48.719 /home/vagrant/spdk_repo/spdk 00:14:48.719 00:28:42 -- bdev/blockdev.sh@370 -- # popd 00:14:48.719 00:28:42 -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:14:48.719 00:14:48.719 real 0m30.929s 00:14:48.719 user 3m19.212s 00:14:48.719 sys 0m6.341s 00:14:48.719 00:28:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:48.719 00:28:42 -- common/autotest_common.sh@10 -- # set +x 00:14:48.719 ************************************ 00:14:48.719 END TEST bdev_fio 00:14:48.719 ************************************ 00:14:48.719 00:28:42 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:48.720 00:28:42 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:48.720 00:28:42 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:14:48.720 00:28:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:48.720 00:28:42 -- common/autotest_common.sh@10 -- # set +x 00:14:48.720 ************************************ 00:14:48.720 START TEST bdev_verify 00:14:48.720 ************************************ 00:14:48.720 00:28:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:48.720 [2024-04-24 00:28:42.350531] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
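The fio_bdev invocation traced at the start of the bdev_fio_trim test above resolves which ASan runtime the spdk_bdev fio plugin links against and preloads both before launching fio. Condensed into a stand-alone sketch; the paths and flags are the ones printed in the log, the rest is illustrative:

# Sketch of the LD_PRELOAD setup from the bdev_fio_trim trace above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev   # fio ioengine plugin, per the log
fio_bin=/usr/src/fio/fio
# Resolve the ASan runtime the plugin was built against,
# e.g. /lib/x86_64-linux-gnu/libasan.so.6 in this run.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" "$fio_bin" \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio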
00:14:48.720 [2024-04-24 00:28:42.350899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118396 ] 00:14:48.982 [2024-04-24 00:28:42.517721] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:48.982 [2024-04-24 00:28:42.732265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.983 [2024-04-24 00:28:42.732271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.550 [2024-04-24 00:28:43.177326] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:49.550 [2024-04-24 00:28:43.177436] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:49.550 [2024-04-24 00:28:43.185250] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:49.550 [2024-04-24 00:28:43.185308] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:49.550 [2024-04-24 00:28:43.193269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:49.550 [2024-04-24 00:28:43.193320] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:49.551 [2024-04-24 00:28:43.193353] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:49.809 [2024-04-24 00:28:43.407723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:49.809 [2024-04-24 00:28:43.407852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.809 [2024-04-24 00:28:43.407939] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:49.809 [2024-04-24 00:28:43.407970] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.809 [2024-04-24 00:28:43.410634] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.809 [2024-04-24 00:28:43.410685] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:50.068 Running I/O for 5 seconds... 
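For reference, the bdevperf command this verify test runs (recorded in full in the run_test line above), with the flag meanings spelled out; the reading of -C is inferred from the paired Core Mask 0x1/0x2 rows in the table that follows, not stated by the log:

# Reproduction of the recorded bdevperf invocation, not the harness wrapper.
#   --json    : bdev configuration to load at startup
#   -q 128    : queue depth per job         -o 4096 : I/O size in bytes
#   -w verify : write, read back, compare   -t 5    : run time in seconds
#   -m 0x3    : core mask (cores 0 and 1)
#   -C        : apparently fans each bdev out to every core in the mask,
#               hence one Core Mask 0x1 and one Core Mask 0x2 job per bdev below
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3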
00:14:56.660 00:14:56.660 Latency(us) 00:14:56.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.660 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.660 Verification LBA range: start 0x0 length 0x1000 00:14:56.660 Malloc0 : 5.11 1302.28 5.09 0.00 0.00 98114.91 604.65 365503.63 00:14:56.660 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.660 Verification LBA range: start 0x1000 length 0x1000 00:14:56.661 Malloc0 : 5.10 1280.71 5.00 0.00 0.00 99773.72 651.46 405449.39 00:14:56.661 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x800 00:14:56.661 Malloc1p0 : 5.11 675.93 2.64 0.00 0.00 188522.21 3198.78 206719.27 00:14:56.661 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x800 length 0x800 00:14:56.661 Malloc1p0 : 5.10 677.76 2.65 0.00 0.00 188022.49 3151.97 206719.27 00:14:56.661 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x800 00:14:56.661 Malloc1p1 : 5.11 675.67 2.64 0.00 0.00 188131.64 3401.63 201726.05 00:14:56.661 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x800 length 0x800 00:14:56.661 Malloc1p1 : 5.10 677.49 2.65 0.00 0.00 187617.09 3323.61 201726.05 00:14:56.661 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x200 00:14:56.661 Malloc2p0 : 5.12 675.43 2.64 0.00 0.00 187682.68 3198.78 196732.83 00:14:56.661 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x200 length 0x200 00:14:56.661 Malloc2p0 : 5.10 677.20 2.65 0.00 0.00 187186.32 3167.57 196732.83 00:14:56.661 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x200 00:14:56.661 Malloc2p1 : 5.12 675.17 2.64 0.00 0.00 187233.71 3214.38 192738.26 00:14:56.661 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x200 length 0x200 00:14:56.661 Malloc2p1 : 5.11 676.91 2.64 0.00 0.00 186759.45 3198.78 193736.90 00:14:56.661 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x200 00:14:56.661 Malloc2p2 : 5.12 674.93 2.64 0.00 0.00 186808.61 3058.35 189742.32 00:14:56.661 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x200 length 0x200 00:14:56.661 Malloc2p2 : 5.23 685.24 2.68 0.00 0.00 184054.78 2995.93 189742.32 00:14:56.661 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x200 00:14:56.661 Malloc2p3 : 5.24 684.41 2.67 0.00 0.00 183809.93 3011.54 185747.75 00:14:56.661 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x200 length 0x200 00:14:56.661 Malloc2p3 : 5.23 684.98 2.68 0.00 0.00 183672.51 2980.33 185747.75 00:14:56.661 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x200 00:14:56.661 Malloc2p4 : 5.24 683.95 2.67 0.00 0.00 183504.71 3027.14 
183750.46 00:14:56.661 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x200 length 0x200 00:14:56.661 Malloc2p4 : 5.23 684.74 2.67 0.00 0.00 183309.52 3027.14 183750.46 00:14:56.661 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x200 00:14:56.661 Malloc2p5 : 5.24 683.48 2.67 0.00 0.00 183197.83 2933.52 178757.24 00:14:56.661 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x200 length 0x200 00:14:56.661 Malloc2p5 : 5.24 684.44 2.67 0.00 0.00 182940.89 2871.10 178757.24 00:14:56.661 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x200 00:14:56.661 Malloc2p6 : 5.25 683.04 2.67 0.00 0.00 182876.98 2917.91 175761.31 00:14:56.661 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x200 length 0x200 00:14:56.661 Malloc2p6 : 5.24 683.98 2.67 0.00 0.00 182623.62 2886.70 175761.31 00:14:56.661 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x200 00:14:56.661 Malloc2p7 : 5.25 682.67 2.67 0.00 0.00 182510.90 2777.48 171766.74 00:14:56.661 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x200 length 0x200 00:14:56.661 Malloc2p7 : 5.24 683.52 2.67 0.00 0.00 182294.30 2761.87 171766.74 00:14:56.661 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x1000 00:14:56.661 TestPT : 5.25 662.24 2.59 0.00 0.00 186847.95 15229.32 171766.74 00:14:56.661 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x1000 length 0x1000 00:14:56.661 TestPT : 5.25 658.71 2.57 0.00 0.00 188427.96 14293.09 245666.38 00:14:56.661 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x2000 00:14:56.661 raid0 : 5.26 682.01 2.66 0.00 0.00 181562.61 2902.31 153791.15 00:14:56.661 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x2000 length 0x2000 00:14:56.661 raid0 : 5.25 682.73 2.67 0.00 0.00 181419.13 2902.31 148797.93 00:14:56.661 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x2000 00:14:56.661 concat0 : 5.26 681.70 2.66 0.00 0.00 181224.55 3058.35 148797.93 00:14:56.661 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x2000 length 0x2000 00:14:56.661 concat0 : 5.25 682.37 2.67 0.00 0.00 181082.38 2995.93 145802.00 00:14:56.661 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 length 0x1000 00:14:56.661 raid1 : 5.26 681.44 2.66 0.00 0.00 180832.87 3651.29 142806.06 00:14:56.661 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x1000 length 0x1000 00:14:56.661 raid1 : 5.26 681.94 2.66 0.00 0.00 180738.19 3744.91 139810.13 00:14:56.661 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x0 
length 0x4e2 00:14:56.661 AIO0 : 5.26 681.16 2.66 0.00 0.00 180192.35 1146.88 153791.15 00:14:56.661 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:56.661 Verification LBA range: start 0x4e2 length 0x4e2 00:14:56.661 AIO0 : 5.26 681.60 2.66 0.00 0.00 180079.51 604.65 148797.93 00:14:56.661 =================================================================================================================== 00:14:56.661 Total : 22969.85 89.73 0.00 0.00 174714.56 604.65 405449.39 00:14:58.576 00:14:58.576 real 0m9.678s 00:14:58.576 user 0m17.148s 00:14:58.576 sys 0m0.498s 00:14:58.576 00:28:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:58.576 00:28:51 -- common/autotest_common.sh@10 -- # set +x 00:14:58.576 ************************************ 00:14:58.576 END TEST bdev_verify 00:14:58.576 ************************************ 00:14:58.576 00:28:51 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:58.576 00:28:51 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:14:58.576 00:28:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:58.576 00:28:51 -- common/autotest_common.sh@10 -- # set +x 00:14:58.576 ************************************ 00:14:58.576 START TEST bdev_verify_big_io 00:14:58.576 ************************************ 00:14:58.576 00:28:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:58.576 [2024-04-24 00:28:52.129225] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
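A quick consistency check on the table above: with 4096-byte I/Os, MiB/s is simply IOPS x 4096 / 2^20, so the Total row's 22969.85 IOPS and 89.73 MiB/s agree. Illustrative one-liner (bc is only used for the arithmetic):

# MiB/s = IOPS * 4096 bytes / 2^20; check against the verify Total row above.
echo 'scale=4; 22969.85 * 4096 / 1048576' | bc    # ~89.73, as reported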
00:14:58.576 [2024-04-24 00:28:52.129455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118537 ] 00:14:58.576 [2024-04-24 00:28:52.316286] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:58.834 [2024-04-24 00:28:52.534651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.834 [2024-04-24 00:28:52.534652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.398 [2024-04-24 00:28:52.954139] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:59.398 [2024-04-24 00:28:52.954243] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:59.398 [2024-04-24 00:28:52.962102] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:59.398 [2024-04-24 00:28:52.962147] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:59.398 [2024-04-24 00:28:52.970125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:59.398 [2024-04-24 00:28:52.970166] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:59.398 [2024-04-24 00:28:52.970196] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:59.398 [2024-04-24 00:28:53.177426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:59.398 [2024-04-24 00:28:53.177548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.398 [2024-04-24 00:28:53.177590] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:59.398 [2024-04-24 00:28:53.177613] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.398 [2024-04-24 00:28:53.180262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.398 [2024-04-24 00:28:53.180317] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:59.964 [2024-04-24 00:28:53.588186] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.592093] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.596467] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.600593] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.604373] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.608570] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.612440] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.616486] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.620238] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.624528] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.628010] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.632047] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.635567] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.639701] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.643766] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.647193] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:14:59.964 [2024-04-24 00:28:53.738028] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:59.964 [2024-04-24 00:28:53.745276] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:15:00.223 Running I/O for 5 seconds... 00:15:06.781 00:15:06.781 Latency(us) 00:15:06.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.781 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x100 00:15:06.781 Malloc0 : 5.64 227.09 14.19 0.00 0.00 553096.25 616.35 1637775.85 00:15:06.781 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x100 length 0x100 00:15:06.781 Malloc0 : 5.85 197.02 12.31 0.00 0.00 638540.85 670.96 1925385.26 00:15:06.781 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x80 00:15:06.781 Malloc1p0 : 6.48 41.98 2.62 0.00 0.00 2755992.12 1279.51 4601750.67 00:15:06.781 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x80 length 0x80 00:15:06.781 Malloc1p0 : 6.10 109.57 6.85 0.00 0.00 1087602.65 2153.33 2268918.74 00:15:06.781 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x80 00:15:06.781 Malloc1p1 : 6.48 41.97 2.62 0.00 0.00 2678667.19 1279.51 4441967.66 00:15:06.781 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x80 length 0x80 00:15:06.781 Malloc1p1 : 6.43 42.33 2.65 0.00 0.00 2709846.35 1318.52 4697620.48 00:15:06.781 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x20 00:15:06.781 Malloc2p0 : 6.02 31.92 1.99 0.00 0.00 897655.48 639.76 1781580.56 00:15:06.781 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x20 length 0x20 00:15:06.781 Malloc2p0 : 6.01 29.27 1.83 0.00 0.00 982011.58 624.15 1661743.30 00:15:06.781 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x20 00:15:06.781 Malloc2p1 : 6.02 31.91 1.99 0.00 0.00 890598.33 639.76 1757613.10 00:15:06.781 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x20 length 0x20 00:15:06.781 Malloc2p1 : 6.01 29.27 1.83 0.00 0.00 974971.36 596.85 1645765.00 00:15:06.781 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x20 00:15:06.781 Malloc2p2 : 6.02 31.89 1.99 0.00 0.00 883422.91 604.65 1725656.50 00:15:06.781 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x20 length 0x20 00:15:06.781 Malloc2p2 : 6.01 29.26 1.83 0.00 0.00 967124.46 624.15 1629786.70 00:15:06.781 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x20 00:15:06.781 Malloc2p3 : 6.02 31.89 1.99 0.00 0.00 876283.97 612.45 1701689.05 00:15:06.781 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x20 length 0x20 00:15:06.781 Malloc2p3 : 6.02 29.25 1.83 0.00 0.00 959488.94 596.85 1605819.25 00:15:06.781 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x20 00:15:06.781 Malloc2p4 : 6.02 31.88 1.99 0.00 0.00 868862.08 659.26 1677721.60 00:15:06.781 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x20 length 0x20 00:15:06.781 Malloc2p4 : 6.02 29.24 1.83 0.00 0.00 952313.89 620.25 1581851.79 00:15:06.781 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x20 00:15:06.781 Malloc2p5 : 6.11 34.07 2.13 0.00 0.00 814365.06 667.06 1653754.15 00:15:06.781 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x20 length 0x20 00:15:06.781 Malloc2p5 : 6.10 31.48 1.97 0.00 0.00 885918.02 674.86 1557884.34 00:15:06.781 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x20 00:15:06.781 Malloc2p6 : 6.11 34.06 2.13 0.00 0.00 808137.75 612.45 1629786.70 00:15:06.781 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x20 length 0x20 00:15:06.781 Malloc2p6 : 6.10 31.48 1.97 0.00 0.00 878888.91 667.06 1533916.89 00:15:06.781 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x20 00:15:06.781 Malloc2p7 : 6.11 34.05 2.13 0.00 0.00 801655.72 807.50 1605819.25 00:15:06.781 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x20 length 0x20 00:15:06.781 Malloc2p7 : 6.10 31.47 1.97 0.00 0.00 871214.54 596.85 1509949.44 00:15:06.781 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x100 00:15:06.781 TestPT : 6.57 46.27 2.89 0.00 0.00 2261199.56 1256.11 4090445.04 00:15:06.781 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x100 length 0x100 00:15:06.781 TestPT : 6.48 40.15 2.51 0.00 0.00 2624606.85 93872.52 3563161.11 00:15:06.781 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x200 00:15:06.781 raid0 : 6.59 50.98 3.19 0.00 0.00 2030892.92 1388.74 3930662.03 00:15:06.781 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x200 length 0x200 00:15:06.781 raid0 : 6.37 47.70 2.98 0.00 0.00 2167239.60 1326.32 4218271.45 00:15:06.781 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x200 00:15:06.781 concat0 : 6.54 53.80 3.36 0.00 0.00 1870455.70 1396.54 3770879.02 00:15:06.781 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x200 length 0x200 00:15:06.781 concat0 : 6.43 59.72 3.73 0.00 0.00 1706183.83 
1404.34 4026531.84 00:15:06.781 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:06.781 Verification LBA range: start 0x0 length 0x100 00:15:06.782 raid1 : 6.54 67.84 4.24 0.00 0.00 1450214.22 1739.82 3643052.62 00:15:06.782 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:06.782 Verification LBA range: start 0x100 length 0x100 00:15:06.782 raid1 : 6.48 61.73 3.86 0.00 0.00 1607900.61 1739.82 3882727.13 00:15:06.782 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:15:06.782 Verification LBA range: start 0x0 length 0x4e 00:15:06.782 AIO0 : 6.59 83.27 5.20 0.00 0.00 708413.81 1888.06 2316853.64 00:15:06.782 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:15:06.782 Verification LBA range: start 0x4e length 0x4e 00:15:06.782 AIO0 : 6.56 67.69 4.23 0.00 0.00 875338.79 1100.07 2396745.14 00:15:06.782 =================================================================================================================== 00:15:06.782 Total : 1741.49 108.84 0.00 0.00 1228370.53 596.85 4697620.48 00:15:08.152 [2024-04-24 00:29:01.594654] thread.c:2244:spdk_io_device_unregister: *WARNING*: io_device bdev_Malloc3 (0x616000009681) has 216 for_each calls outstanding 00:15:10.053 00:15:10.053 real 0m11.509s 00:15:10.053 user 0m21.263s 00:15:10.053 sys 0m0.428s 00:15:10.053 00:29:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:10.053 ************************************ 00:15:10.053 END TEST bdev_verify_big_io 00:15:10.053 ************************************ 00:15:10.053 00:29:03 -- common/autotest_common.sh@10 -- # set +x 00:15:10.053 00:29:03 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:10.053 00:29:03 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:15:10.053 00:29:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:10.053 00:29:03 -- common/autotest_common.sh@10 -- # set +x 00:15:10.053 ************************************ 00:15:10.053 START TEST bdev_write_zeroes 00:15:10.053 ************************************ 00:15:10.053 00:29:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:10.053 [2024-04-24 00:29:03.723925] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
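The queue-depth warnings in the big-I/O run above line up with the bdev sizes listed earlier in the log: a Malloc2p* split is 8192 blocks x 512 B = 4 MiB, i.e. 64 distinct 65536-byte regions, and AIO0 is 5000 blocks x 2048 B, about 156 regions; splitting those across the two cores in the mask reproduces the reported limits of 32 and 78. The per-core halving is an inference from the numbers, not something the log states:

# Inferred reconstruction of the queue-depth clamps in the warnings above.
io=65536
echo $(( 8192 * 512  / io / 2 ))   # Malloc2p*: 64 regions -> 32 per core
echo $(( 5000 * 2048 / io / 2 ))   # AIO0: 156 regions -> 78 per core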
00:15:10.053 [2024-04-24 00:29:03.724131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118698 ] 00:15:10.311 [2024-04-24 00:29:03.904981] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.572 [2024-04-24 00:29:04.182689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.829 [2024-04-24 00:29:04.610527] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:10.829 [2024-04-24 00:29:04.610636] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:15:10.829 [2024-04-24 00:29:04.618491] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:10.829 [2024-04-24 00:29:04.618550] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:15:11.087 [2024-04-24 00:29:04.626510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:11.087 [2024-04-24 00:29:04.626570] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:15:11.087 [2024-04-24 00:29:04.626600] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:15:11.345 [2024-04-24 00:29:04.879911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:11.345 [2024-04-24 00:29:04.880057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.345 [2024-04-24 00:29:04.880112] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:11.345 [2024-04-24 00:29:04.880149] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.345 [2024-04-24 00:29:04.883463] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.345 [2024-04-24 00:29:04.883541] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:15:11.602 Running I/O for 1 seconds... 
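The bdev_write_zeroes pass that starts here is driven entirely by SPDK's bdevperf example, using the command already visible in the xtrace; nothing else from the harness is needed to reproduce the job table that follows. A minimal sketch, assuming an already-built SPDK tree at $SPDK_DIR (an assumption; the CI run uses /home/vagrant/spdk_repo/spdk) and the same test/bdev/bdev.json that declares the Malloc*/raid/passthru bdevs opened above:

  # re-run the write_zeroes job locally; $SPDK_DIR is an assumption, not part of the trace
  # -q 128           queue depth per job
  # -o 4096          I/O size in bytes
  # -w write_zeroes  workload, matching the job table below
  # -t 1             run time in seconds
  "$SPDK_DIR"/build/examples/bdevperf \
      --json "$SPDK_DIR"/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1
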
00:15:12.974 00:15:12.974 Latency(us) 00:15:12.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.974 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 Malloc0 : 1.04 5417.44 21.16 0.00 0.00 23613.24 581.24 38697.45 00:15:12.974 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 Malloc1p0 : 1.04 5410.34 21.13 0.00 0.00 23608.26 787.99 37698.80 00:15:12.974 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 Malloc1p1 : 1.04 5403.48 21.11 0.00 0.00 23591.52 803.60 36949.82 00:15:12.974 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 Malloc2p0 : 1.04 5396.58 21.08 0.00 0.00 23571.27 803.60 36200.84 00:15:12.974 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 Malloc2p1 : 1.04 5389.60 21.05 0.00 0.00 23557.61 803.60 35701.52 00:15:12.974 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 Malloc2p2 : 1.05 5382.90 21.03 0.00 0.00 23543.08 760.69 35202.19 00:15:12.974 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 Malloc2p3 : 1.05 5375.95 21.00 0.00 0.00 23532.70 799.70 34702.87 00:15:12.974 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 Malloc2p4 : 1.05 5368.97 20.97 0.00 0.00 23517.36 764.59 34453.21 00:15:12.974 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 Malloc2p5 : 1.05 5362.25 20.95 0.00 0.00 23502.16 799.70 33704.23 00:15:12.974 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 Malloc2p6 : 1.05 5355.50 20.92 0.00 0.00 23482.80 768.49 33204.91 00:15:12.974 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 Malloc2p7 : 1.05 5348.50 20.89 0.00 0.00 23476.78 787.99 32955.25 00:15:12.974 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 TestPT : 1.05 5341.61 20.87 0.00 0.00 23456.53 799.70 32705.58 00:15:12.974 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 raid0 : 1.06 5333.58 20.83 0.00 0.00 23432.94 1466.76 31831.77 00:15:12.974 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 concat0 : 1.06 5325.96 20.80 0.00 0.00 23394.70 1435.55 30458.64 00:15:12.974 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 raid1 : 1.06 5316.45 20.77 0.00 0.00 23342.86 2324.97 28336.52 00:15:12.974 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:12.974 AIO0 : 1.06 5303.04 20.71 0.00 0.00 23297.68 1482.36 27712.37 00:15:12.974 =================================================================================================================== 00:15:12.974 Total : 85832.14 335.28 0.00 0.00 23495.11 581.24 38697.45 00:15:15.519 00:15:15.519 real 0m5.534s 00:15:15.519 user 0m4.926s 00:15:15.519 sys 0m0.424s 00:15:15.519 00:29:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:15.519 00:29:09 -- common/autotest_common.sh@10 -- # set +x 00:15:15.519 ************************************ 00:15:15.519 END TEST bdev_write_zeroes 00:15:15.519 ************************************ 00:15:15.519 00:29:09 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:15.519 00:29:09 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:15:15.519 00:29:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.519 00:29:09 -- common/autotest_common.sh@10 -- # set +x 00:15:15.519 ************************************ 00:15:15.519 START TEST bdev_json_nonenclosed 00:15:15.519 ************************************ 00:15:15.519 00:29:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:15.778 [2024-04-24 00:29:09.341266] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:15:15.778 [2024-04-24 00:29:09.341436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118795 ] 00:15:15.778 [2024-04-24 00:29:09.505391] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.041 [2024-04-24 00:29:09.734074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.041 [2024-04-24 00:29:09.734227] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:16.041 [2024-04-24 00:29:09.734266] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:16.041 [2024-04-24 00:29:09.734293] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:16.608 00:15:16.608 real 0m0.956s 00:15:16.608 user 0m0.713s 00:15:16.608 sys 0m0.144s 00:15:16.608 00:29:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:16.608 00:29:10 -- common/autotest_common.sh@10 -- # set +x 00:15:16.608 ************************************ 00:15:16.608 END TEST bdev_json_nonenclosed 00:15:16.608 ************************************ 00:15:16.608 00:29:10 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:16.608 00:29:10 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:15:16.608 00:29:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.608 00:29:10 -- common/autotest_common.sh@10 -- # set +x 00:15:16.608 ************************************ 00:15:16.608 START TEST bdev_json_nonarray 00:15:16.608 ************************************ 00:15:16.608 00:29:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:16.608 [2024-04-24 00:29:10.392726] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:15:16.608 [2024-04-24 00:29:10.392979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118837 ] 00:15:16.866 [2024-04-24 00:29:10.571028] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.124 [2024-04-24 00:29:10.856592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.124 [2024-04-24 00:29:10.856755] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:15:17.124 [2024-04-24 00:29:10.856817] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:17.124 [2024-04-24 00:29:10.856857] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:17.691 00:15:17.691 real 0m1.057s 00:15:17.691 user 0m0.796s 00:15:17.691 sys 0m0.161s 00:15:17.691 00:29:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:17.691 ************************************ 00:15:17.691 END TEST bdev_json_nonarray 00:15:17.691 ************************************ 00:15:17.691 00:29:11 -- common/autotest_common.sh@10 -- # set +x 00:15:17.691 00:29:11 -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:15:17.691 00:29:11 -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:15:17.691 00:29:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:17.691 00:29:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:17.691 00:29:11 -- common/autotest_common.sh@10 -- # set +x 00:15:17.691 ************************************ 00:15:17.691 START TEST bdev_qos 00:15:17.691 ************************************ 00:15:17.691 00:29:11 -- common/autotest_common.sh@1111 -- # qos_test_suite '' 00:15:17.691 00:29:11 -- bdev/blockdev.sh@446 -- # QOS_PID=118879 00:15:17.691 Process qos testing pid: 118879 00:15:17.691 00:29:11 -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 118879' 00:15:17.691 00:29:11 -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:15:17.691 00:29:11 -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:15:17.691 00:29:11 -- bdev/blockdev.sh@449 -- # waitforlisten 118879 00:15:17.691 00:29:11 -- common/autotest_common.sh@817 -- # '[' -z 118879 ']' 00:15:17.691 00:29:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.691 00:29:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.691 00:29:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.691 00:29:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.691 00:29:11 -- common/autotest_common.sh@10 -- # set +x 00:15:17.949 [2024-04-24 00:29:11.531851] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:15:17.949 [2024-04-24 00:29:11.532060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118879 ] 00:15:17.949 [2024-04-24 00:29:11.719016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.208 [2024-04-24 00:29:11.939705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.774 00:29:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:18.774 00:29:12 -- common/autotest_common.sh@850 -- # return 0 00:15:18.774 00:29:12 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:15:18.774 00:29:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:18.774 00:29:12 -- common/autotest_common.sh@10 -- # set +x 00:15:19.031 Malloc_0 00:15:19.031 00:29:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.031 00:29:12 -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:15:19.031 00:29:12 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_0 00:15:19.031 00:29:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:19.031 00:29:12 -- common/autotest_common.sh@887 -- # local i 00:15:19.031 00:29:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:19.031 00:29:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:19.031 00:29:12 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:15:19.031 00:29:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.031 00:29:12 -- common/autotest_common.sh@10 -- # set +x 00:15:19.031 00:29:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.031 00:29:12 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:15:19.031 00:29:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.031 00:29:12 -- common/autotest_common.sh@10 -- # set +x 00:15:19.031 [ 00:15:19.031 { 00:15:19.031 "name": "Malloc_0", 00:15:19.031 "aliases": [ 00:15:19.031 "6433e588-8830-40ab-a760-9e3f1b5610a1" 00:15:19.031 ], 00:15:19.031 "product_name": "Malloc disk", 00:15:19.031 "block_size": 512, 00:15:19.031 "num_blocks": 262144, 00:15:19.031 "uuid": "6433e588-8830-40ab-a760-9e3f1b5610a1", 00:15:19.031 "assigned_rate_limits": { 00:15:19.031 "rw_ios_per_sec": 0, 00:15:19.031 "rw_mbytes_per_sec": 0, 00:15:19.031 "r_mbytes_per_sec": 0, 00:15:19.031 "w_mbytes_per_sec": 0 00:15:19.031 }, 00:15:19.031 "claimed": false, 00:15:19.031 "zoned": false, 00:15:19.031 "supported_io_types": { 00:15:19.031 "read": true, 00:15:19.031 "write": true, 00:15:19.031 "unmap": true, 00:15:19.031 "write_zeroes": true, 00:15:19.031 "flush": true, 00:15:19.031 "reset": true, 00:15:19.031 "compare": false, 00:15:19.031 "compare_and_write": false, 00:15:19.031 "abort": true, 00:15:19.031 "nvme_admin": false, 00:15:19.031 "nvme_io": false 00:15:19.031 }, 00:15:19.031 "memory_domains": [ 00:15:19.031 { 00:15:19.031 "dma_device_id": "system", 00:15:19.031 "dma_device_type": 1 00:15:19.031 }, 00:15:19.031 { 00:15:19.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.031 "dma_device_type": 2 00:15:19.031 } 00:15:19.031 ], 00:15:19.031 "driver_specific": {} 00:15:19.031 } 00:15:19.031 ] 00:15:19.031 00:29:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.031 00:29:12 -- common/autotest_common.sh@893 -- # return 0 00:15:19.031 00:29:12 -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:15:19.031 00:29:12 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.031 00:29:12 -- common/autotest_common.sh@10 -- # set +x 00:15:19.031 Null_1 00:15:19.031 00:29:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.031 00:29:12 -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:15:19.031 00:29:12 -- common/autotest_common.sh@885 -- # local bdev_name=Null_1 00:15:19.031 00:29:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:19.031 00:29:12 -- common/autotest_common.sh@887 -- # local i 00:15:19.031 00:29:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:19.031 00:29:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:19.031 00:29:12 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:15:19.031 00:29:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.031 00:29:12 -- common/autotest_common.sh@10 -- # set +x 00:15:19.031 00:29:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.031 00:29:12 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:15:19.031 00:29:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:19.031 00:29:12 -- common/autotest_common.sh@10 -- # set +x 00:15:19.031 [ 00:15:19.031 { 00:15:19.031 "name": "Null_1", 00:15:19.031 "aliases": [ 00:15:19.031 "f664756c-65c1-4ae9-b33c-755247823628" 00:15:19.031 ], 00:15:19.031 "product_name": "Null disk", 00:15:19.031 "block_size": 512, 00:15:19.031 "num_blocks": 262144, 00:15:19.031 "uuid": "f664756c-65c1-4ae9-b33c-755247823628", 00:15:19.031 "assigned_rate_limits": { 00:15:19.031 "rw_ios_per_sec": 0, 00:15:19.031 "rw_mbytes_per_sec": 0, 00:15:19.031 "r_mbytes_per_sec": 0, 00:15:19.031 "w_mbytes_per_sec": 0 00:15:19.031 }, 00:15:19.031 "claimed": false, 00:15:19.031 "zoned": false, 00:15:19.031 "supported_io_types": { 00:15:19.031 "read": true, 00:15:19.031 "write": true, 00:15:19.031 "unmap": false, 00:15:19.031 "write_zeroes": true, 00:15:19.031 "flush": false, 00:15:19.031 "reset": true, 00:15:19.031 "compare": false, 00:15:19.031 "compare_and_write": false, 00:15:19.031 "abort": true, 00:15:19.031 "nvme_admin": false, 00:15:19.031 "nvme_io": false 00:15:19.031 }, 00:15:19.031 "driver_specific": {} 00:15:19.031 } 00:15:19.031 ] 00:15:19.031 00:29:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:19.031 00:29:12 -- common/autotest_common.sh@893 -- # return 0 00:15:19.031 00:29:12 -- bdev/blockdev.sh@457 -- # qos_function_test 00:15:19.031 00:29:12 -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:19.031 00:29:12 -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:15:19.031 00:29:12 -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:15:19.031 00:29:12 -- bdev/blockdev.sh@412 -- # local io_result=0 00:15:19.031 00:29:12 -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:15:19.031 00:29:12 -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:15:19.031 00:29:12 -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:15:19.031 00:29:12 -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:15:19.031 00:29:12 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:15:19.031 00:29:12 -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:19.031 00:29:12 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:19.031 00:29:12 -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:15:19.031 00:29:12 -- bdev/blockdev.sh@378 -- # tail -1 00:15:19.031 Running I/O for 60 seconds... 
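The qos_function_test sequence starting here first measures Malloc_0's unthrottled IOPS before any limit is applied, using nothing more than scripts/iostat.py filtered down to the device line, exactly as the xtrace shows. A condensed sketch of that pipeline, assuming the bdevperf RPC socket is already serving; the final integer truncation is paraphrased, since the exact parameter expansion blockdev.sh uses is not visible in the trace:

  # sample bdev stats over five 1-second intervals and keep the last Malloc_0 line
  iostat_result=$(/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1)
  # the second column is IOPS; drop the fractional part before using it in bash arithmetic
  io_result=$(echo "$iostat_result" | awk '{print $2}')
  io_result=${io_result%.*}
  echo "unthrottled Malloc_0 IOPS: $io_result"

The measured value is then used to pick an IOPS cap for bdev_set_qos_limit, and the run_qos_test calls that follow check that the throttled device actually stays near that cap.
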
00:15:24.292 00:29:17 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 70507.79 282031.16 0.00 0.00 286720.00 0.00 0.00 ' 00:15:24.292 00:29:17 -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:15:24.292 00:29:17 -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:15:24.292 00:29:17 -- bdev/blockdev.sh@380 -- # iostat_result=70507.79 00:15:24.292 00:29:17 -- bdev/blockdev.sh@385 -- # echo 70507 00:15:24.292 00:29:17 -- bdev/blockdev.sh@416 -- # io_result=70507 00:15:24.292 00:29:17 -- bdev/blockdev.sh@418 -- # iops_limit=17000 00:15:24.292 00:29:17 -- bdev/blockdev.sh@419 -- # '[' 17000 -gt 1000 ']' 00:15:24.292 00:29:17 -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0 00:15:24.292 00:29:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.292 00:29:17 -- common/autotest_common.sh@10 -- # set +x 00:15:24.292 00:29:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.292 00:29:17 -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 17000 IOPS Malloc_0 00:15:24.292 00:29:17 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:24.292 00:29:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:24.292 00:29:17 -- common/autotest_common.sh@10 -- # set +x 00:15:24.292 ************************************ 00:15:24.292 START TEST bdev_qos_iops 00:15:24.292 ************************************ 00:15:24.292 00:29:17 -- common/autotest_common.sh@1111 -- # run_qos_test 17000 IOPS Malloc_0 00:15:24.292 00:29:17 -- bdev/blockdev.sh@389 -- # local qos_limit=17000 00:15:24.292 00:29:17 -- bdev/blockdev.sh@390 -- # local qos_result=0 00:15:24.292 00:29:17 -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:15:24.292 00:29:17 -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:15:24.292 00:29:17 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:15:24.292 00:29:17 -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:24.292 00:29:17 -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:15:24.292 00:29:17 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:24.292 00:29:17 -- bdev/blockdev.sh@378 -- # tail -1 00:15:29.550 00:29:23 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 16991.89 67967.56 0.00 0.00 69496.00 0.00 0.00 ' 00:15:29.550 00:29:23 -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:15:29.550 00:29:23 -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:15:29.550 00:29:23 -- bdev/blockdev.sh@380 -- # iostat_result=16991.89 00:15:29.550 00:29:23 -- bdev/blockdev.sh@385 -- # echo 16991 00:15:29.550 00:29:23 -- bdev/blockdev.sh@392 -- # qos_result=16991 00:15:29.550 00:29:23 -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:15:29.550 00:29:23 -- bdev/blockdev.sh@396 -- # lower_limit=15300 00:15:29.550 00:29:23 -- bdev/blockdev.sh@397 -- # upper_limit=18700 00:15:29.550 00:29:23 -- bdev/blockdev.sh@400 -- # '[' 16991 -lt 15300 ']' 00:15:29.550 00:29:23 -- bdev/blockdev.sh@400 -- # '[' 16991 -gt 18700 ']' 00:15:29.550 00:15:29.550 real 0m5.213s 00:15:29.550 user 0m0.103s 00:15:29.550 sys 0m0.020s 00:15:29.550 00:29:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:29.550 00:29:23 -- common/autotest_common.sh@10 -- # set +x 00:15:29.550 ************************************ 00:15:29.550 END TEST bdev_qos_iops 00:15:29.550 ************************************ 00:15:29.550 00:29:23 -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:15:29.550 00:29:23 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:15:29.550 00:29:23 -- 
bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:15:29.550 00:29:23 -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:29.550 00:29:23 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:29.550 00:29:23 -- bdev/blockdev.sh@378 -- # grep Null_1 00:15:29.550 00:29:23 -- bdev/blockdev.sh@378 -- # tail -1 00:15:34.829 00:29:28 -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 24163.78 96655.12 0.00 0.00 98304.00 0.00 0.00 ' 00:15:34.829 00:29:28 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:15:34.829 00:29:28 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:34.829 00:29:28 -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:15:34.829 00:29:28 -- bdev/blockdev.sh@382 -- # iostat_result=98304.00 00:15:34.829 00:29:28 -- bdev/blockdev.sh@385 -- # echo 98304 00:15:34.829 00:29:28 -- bdev/blockdev.sh@427 -- # bw_limit=98304 00:15:34.829 00:29:28 -- bdev/blockdev.sh@428 -- # bw_limit=9 00:15:34.829 00:29:28 -- bdev/blockdev.sh@429 -- # '[' 9 -lt 2 ']' 00:15:34.829 00:29:28 -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 9 Null_1 00:15:34.829 00:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.829 00:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:34.829 00:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:34.829 00:29:28 -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 9 BANDWIDTH Null_1 00:15:34.829 00:29:28 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:34.829 00:29:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:34.829 00:29:28 -- common/autotest_common.sh@10 -- # set +x 00:15:34.829 ************************************ 00:15:34.829 START TEST bdev_qos_bw 00:15:34.829 ************************************ 00:15:34.829 00:29:28 -- common/autotest_common.sh@1111 -- # run_qos_test 9 BANDWIDTH Null_1 00:15:34.829 00:29:28 -- bdev/blockdev.sh@389 -- # local qos_limit=9 00:15:34.829 00:29:28 -- bdev/blockdev.sh@390 -- # local qos_result=0 00:15:34.829 00:29:28 -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:15:34.830 00:29:28 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:15:34.830 00:29:28 -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:15:34.830 00:29:28 -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:34.830 00:29:28 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:34.830 00:29:28 -- bdev/blockdev.sh@378 -- # grep Null_1 00:15:34.830 00:29:28 -- bdev/blockdev.sh@378 -- # tail -1 00:15:40.090 00:29:33 -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 2302.58 9210.33 0.00 0.00 9388.00 0.00 0.00 ' 00:15:40.090 00:29:33 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:15:40.090 00:29:33 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:40.090 00:29:33 -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:15:40.090 00:29:33 -- bdev/blockdev.sh@382 -- # iostat_result=9388.00 00:15:40.090 00:29:33 -- bdev/blockdev.sh@385 -- # echo 9388 00:15:40.090 00:29:33 -- bdev/blockdev.sh@392 -- # qos_result=9388 00:15:40.090 00:29:33 -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:40.090 00:29:33 -- bdev/blockdev.sh@394 -- # qos_limit=9216 00:15:40.090 00:29:33 -- bdev/blockdev.sh@396 -- # lower_limit=8294 00:15:40.090 00:29:33 -- bdev/blockdev.sh@397 -- # upper_limit=10137 00:15:40.090 00:29:33 -- bdev/blockdev.sh@400 -- # '[' 9388 -lt 8294 ']' 00:15:40.090 00:29:33 -- bdev/blockdev.sh@400 -- # '[' 9388 -gt 10137 ']' 
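run_qos_test, whose bandwidth variant finishes above, passes when the measured value lands within roughly ±10% of the configured limit; the bounds in the trace (qos_limit=9216, lower_limit=8294, upper_limit=10137 for the 9 MB/s limit on Null_1, and 15300/18700 for the earlier 17000 IOPS case) all follow from that rule. A worked bash sketch of the same arithmetic, not the literal blockdev.sh source:

  qos_limit_mb=9
  qos_limit=$((qos_limit_mb * 1024))     # 9216, the MB/s limit scaled as in the trace
  lower_limit=$((qos_limit * 9 / 10))    # 8294 (integer division)
  upper_limit=$((qos_limit * 11 / 10))   # 10137
  measured=9388                          # throughput reported by iostat for Null_1 above
  if [ "$measured" -ge "$lower_limit" ] && [ "$measured" -le "$upper_limit" ]; then
      echo "within +/-10% of the limit, check passes"
  fi
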
00:15:40.090 00:15:40.090 real 0m5.266s 00:15:40.090 user 0m0.138s 00:15:40.090 sys 0m0.022s 00:15:40.090 00:29:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:40.090 00:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.090 ************************************ 00:15:40.090 END TEST bdev_qos_bw 00:15:40.090 ************************************ 00:15:40.090 00:29:33 -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:15:40.090 00:29:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.090 00:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.090 00:29:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.090 00:29:33 -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:15:40.090 00:29:33 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:40.090 00:29:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.090 00:29:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.348 ************************************ 00:15:40.348 START TEST bdev_qos_ro_bw 00:15:40.348 ************************************ 00:15:40.348 00:29:33 -- common/autotest_common.sh@1111 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:15:40.348 00:29:33 -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:15:40.348 00:29:33 -- bdev/blockdev.sh@390 -- # local qos_result=0 00:15:40.348 00:29:33 -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:15:40.348 00:29:33 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:15:40.348 00:29:33 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:15:40.348 00:29:33 -- bdev/blockdev.sh@377 -- # local iostat_result 00:15:40.348 00:29:33 -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:15:40.348 00:29:33 -- bdev/blockdev.sh@378 -- # tail -1 00:15:40.348 00:29:33 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:15:45.736 00:29:39 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.49 2045.94 0.00 0.00 2060.00 0.00 0.00 ' 00:15:45.736 00:29:39 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:15:45.736 00:29:39 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:45.736 00:29:39 -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:15:45.736 00:29:39 -- bdev/blockdev.sh@382 -- # iostat_result=2060.00 00:15:45.736 00:29:39 -- bdev/blockdev.sh@385 -- # echo 2060 00:15:45.736 00:29:39 -- bdev/blockdev.sh@392 -- # qos_result=2060 00:15:45.736 00:29:39 -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:15:45.736 00:29:39 -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:15:45.736 00:29:39 -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:15:45.736 00:29:39 -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:15:45.736 00:29:39 -- bdev/blockdev.sh@400 -- # '[' 2060 -lt 1843 ']' 00:15:45.736 00:29:39 -- bdev/blockdev.sh@400 -- # '[' 2060 -gt 2252 ']' 00:15:45.736 00:15:45.736 real 0m5.157s 00:15:45.736 user 0m0.096s 00:15:45.736 sys 0m0.032s 00:15:45.736 00:29:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:45.736 ************************************ 00:15:45.736 END TEST bdev_qos_ro_bw 00:15:45.736 00:29:39 -- common/autotest_common.sh@10 -- # set +x 00:15:45.736 ************************************ 00:15:45.736 00:29:39 -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:15:45.736 00:29:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.736 00:29:39 -- common/autotest_common.sh@10 -- # set +x 00:15:46.022 00:29:39 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:46.022 00:29:39 -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:15:46.022 00:29:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:46.022 00:29:39 -- common/autotest_common.sh@10 -- # set +x 00:15:46.297 00:15:46.297 Latency(us) 00:15:46.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.297 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:46.297 Malloc_0 : 26.81 23537.64 91.94 0.00 0.00 10772.45 1927.07 505313.77 00:15:46.297 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:46.297 Null_1 : 27.07 23466.85 91.67 0.00 0.00 10880.66 643.66 259647.39 00:15:46.297 =================================================================================================================== 00:15:46.297 Total : 47004.50 183.61 0.00 0.00 10826.73 643.66 505313.77 00:15:46.297 0 00:15:46.297 00:29:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:46.297 00:29:39 -- bdev/blockdev.sh@461 -- # killprocess 118879 00:15:46.297 00:29:39 -- common/autotest_common.sh@936 -- # '[' -z 118879 ']' 00:15:46.297 00:29:39 -- common/autotest_common.sh@940 -- # kill -0 118879 00:15:46.297 00:29:39 -- common/autotest_common.sh@941 -- # uname 00:15:46.297 00:29:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:46.297 00:29:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118879 00:15:46.297 00:29:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:46.297 00:29:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:46.297 killing process with pid 118879 00:15:46.297 00:29:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118879' 00:15:46.297 Received shutdown signal, test time was about 27.107685 seconds 00:15:46.297 00:15:46.297 Latency(us) 00:15:46.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.297 =================================================================================================================== 00:15:46.297 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:46.297 00:29:39 -- common/autotest_common.sh@955 -- # kill 118879 00:15:46.297 00:29:39 -- common/autotest_common.sh@960 -- # wait 118879 00:15:48.246 00:29:41 -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:15:48.246 00:15:48.246 real 0m30.193s 00:15:48.246 user 0m31.054s 00:15:48.246 sys 0m0.652s 00:15:48.246 00:29:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:48.246 ************************************ 00:15:48.246 END TEST bdev_qos 00:15:48.246 ************************************ 00:15:48.246 00:29:41 -- common/autotest_common.sh@10 -- # set +x 00:15:48.246 00:29:41 -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:15:48.246 00:29:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:48.246 00:29:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.246 00:29:41 -- common/autotest_common.sh@10 -- # set +x 00:15:48.246 ************************************ 00:15:48.246 START TEST bdev_qd_sampling 00:15:48.246 ************************************ 00:15:48.246 00:29:41 -- common/autotest_common.sh@1111 -- # qd_sampling_test_suite '' 00:15:48.246 00:29:41 -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:15:48.246 00:29:41 -- bdev/blockdev.sh@541 -- # QD_PID=119379 00:15:48.246 00:29:41 -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing 
pid: 119379' 00:15:48.246 Process bdev QD sampling period testing pid: 119379 00:15:48.246 00:29:41 -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:15:48.246 00:29:41 -- bdev/blockdev.sh@544 -- # waitforlisten 119379 00:15:48.246 00:29:41 -- common/autotest_common.sh@817 -- # '[' -z 119379 ']' 00:15:48.246 00:29:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.246 00:29:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:48.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.246 00:29:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.246 00:29:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:48.246 00:29:41 -- common/autotest_common.sh@10 -- # set +x 00:15:48.247 00:29:41 -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:15:48.247 [2024-04-24 00:29:41.801818] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:15:48.247 [2024-04-24 00:29:41.801970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119379 ] 00:15:48.247 [2024-04-24 00:29:41.971553] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:48.505 [2024-04-24 00:29:42.244368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.505 [2024-04-24 00:29:42.244380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.069 00:29:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:49.069 00:29:42 -- common/autotest_common.sh@850 -- # return 0 00:15:49.069 00:29:42 -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:15:49.069 00:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.069 00:29:42 -- common/autotest_common.sh@10 -- # set +x 00:15:49.326 Malloc_QD 00:15:49.326 00:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.326 00:29:43 -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:15:49.326 00:29:43 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_QD 00:15:49.326 00:29:43 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:49.326 00:29:43 -- common/autotest_common.sh@887 -- # local i 00:15:49.326 00:29:43 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:49.326 00:29:43 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:49.326 00:29:43 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:15:49.327 00:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.327 00:29:43 -- common/autotest_common.sh@10 -- # set +x 00:15:49.327 00:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.327 00:29:43 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:15:49.327 00:29:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.327 00:29:43 -- common/autotest_common.sh@10 -- # set +x 00:15:49.327 [ 00:15:49.327 { 00:15:49.327 "name": "Malloc_QD", 00:15:49.327 "aliases": [ 00:15:49.327 "f492c585-8ed7-4b4d-9061-eab77a3b9b4b" 00:15:49.327 ], 00:15:49.327 "product_name": "Malloc disk", 00:15:49.327 "block_size": 512, 00:15:49.327 "num_blocks": 262144, 
00:15:49.327 "uuid": "f492c585-8ed7-4b4d-9061-eab77a3b9b4b", 00:15:49.327 "assigned_rate_limits": { 00:15:49.327 "rw_ios_per_sec": 0, 00:15:49.327 "rw_mbytes_per_sec": 0, 00:15:49.327 "r_mbytes_per_sec": 0, 00:15:49.327 "w_mbytes_per_sec": 0 00:15:49.327 }, 00:15:49.327 "claimed": false, 00:15:49.327 "zoned": false, 00:15:49.327 "supported_io_types": { 00:15:49.327 "read": true, 00:15:49.327 "write": true, 00:15:49.327 "unmap": true, 00:15:49.327 "write_zeroes": true, 00:15:49.327 "flush": true, 00:15:49.327 "reset": true, 00:15:49.327 "compare": false, 00:15:49.327 "compare_and_write": false, 00:15:49.327 "abort": true, 00:15:49.327 "nvme_admin": false, 00:15:49.327 "nvme_io": false 00:15:49.327 }, 00:15:49.327 "memory_domains": [ 00:15:49.327 { 00:15:49.327 "dma_device_id": "system", 00:15:49.327 "dma_device_type": 1 00:15:49.327 }, 00:15:49.327 { 00:15:49.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.327 "dma_device_type": 2 00:15:49.327 } 00:15:49.327 ], 00:15:49.327 "driver_specific": {} 00:15:49.327 } 00:15:49.327 ] 00:15:49.327 00:29:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.327 00:29:43 -- common/autotest_common.sh@893 -- # return 0 00:15:49.327 00:29:43 -- bdev/blockdev.sh@550 -- # sleep 2 00:15:49.327 00:29:43 -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:49.584 Running I/O for 5 seconds... 00:15:51.510 00:29:45 -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:15:51.510 00:29:45 -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:15:51.510 00:29:45 -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:15:51.510 00:29:45 -- bdev/blockdev.sh@521 -- # local iostats 00:15:51.510 00:29:45 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:15:51.510 00:29:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.510 00:29:45 -- common/autotest_common.sh@10 -- # set +x 00:15:51.510 00:29:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.510 00:29:45 -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:15:51.510 00:29:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.511 00:29:45 -- common/autotest_common.sh@10 -- # set +x 00:15:51.511 00:29:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.511 00:29:45 -- bdev/blockdev.sh@525 -- # iostats='{ 00:15:51.511 "tick_rate": 2100000000, 00:15:51.511 "ticks": 1837768487212, 00:15:51.511 "bdevs": [ 00:15:51.511 { 00:15:51.511 "name": "Malloc_QD", 00:15:51.511 "bytes_read": 755012096, 00:15:51.511 "num_read_ops": 184323, 00:15:51.511 "bytes_written": 0, 00:15:51.511 "num_write_ops": 0, 00:15:51.511 "bytes_unmapped": 0, 00:15:51.511 "num_unmap_ops": 0, 00:15:51.511 "bytes_copied": 0, 00:15:51.511 "num_copy_ops": 0, 00:15:51.511 "read_latency_ticks": 2028836131864, 00:15:51.511 "max_read_latency_ticks": 14519010, 00:15:51.511 "min_read_latency_ticks": 311922, 00:15:51.511 "write_latency_ticks": 0, 00:15:51.511 "max_write_latency_ticks": 0, 00:15:51.511 "min_write_latency_ticks": 0, 00:15:51.511 "unmap_latency_ticks": 0, 00:15:51.511 "max_unmap_latency_ticks": 0, 00:15:51.511 "min_unmap_latency_ticks": 0, 00:15:51.511 "copy_latency_ticks": 0, 00:15:51.511 "max_copy_latency_ticks": 0, 00:15:51.511 "min_copy_latency_ticks": 0, 00:15:51.511 "io_error": {}, 00:15:51.511 "queue_depth_polling_period": 10, 00:15:51.511 "queue_depth": 512, 00:15:51.511 "io_time": 20, 00:15:51.511 "weighted_io_time": 10240 00:15:51.511 } 00:15:51.511 ] 
00:15:51.511 }' 00:15:51.511 00:29:45 -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:15:51.511 00:29:45 -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:15:51.511 00:29:45 -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:15:51.511 00:29:45 -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:15:51.511 00:29:45 -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:15:51.511 00:29:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.511 00:29:45 -- common/autotest_common.sh@10 -- # set +x 00:15:51.511 00:15:51.511 Latency(us) 00:15:51.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.511 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:15:51.511 Malloc_QD : 1.96 43804.39 171.11 0.00 0.00 5829.72 1739.82 7895.53 00:15:51.511 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:51.511 Malloc_QD : 1.96 53572.25 209.27 0.00 0.00 4766.47 1217.10 6772.05 00:15:51.511 =================================================================================================================== 00:15:51.511 Total : 97376.63 380.38 0.00 0.00 5244.72 1217.10 7895.53 00:15:51.511 0 00:15:51.511 00:29:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.511 00:29:45 -- bdev/blockdev.sh@554 -- # killprocess 119379 00:15:51.511 00:29:45 -- common/autotest_common.sh@936 -- # '[' -z 119379 ']' 00:15:51.511 00:29:45 -- common/autotest_common.sh@940 -- # kill -0 119379 00:15:51.511 00:29:45 -- common/autotest_common.sh@941 -- # uname 00:15:51.511 00:29:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:51.511 00:29:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119379 00:15:51.769 00:29:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:51.769 00:29:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:51.769 killing process with pid 119379 00:15:51.769 00:29:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119379' 00:15:51.769 00:29:45 -- common/autotest_common.sh@955 -- # kill 119379 00:15:51.769 Received shutdown signal, test time was about 2.132851 seconds 00:15:51.769 00:15:51.769 Latency(us) 00:15:51.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.769 =================================================================================================================== 00:15:51.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:51.769 00:29:45 -- common/autotest_common.sh@960 -- # wait 119379 00:15:53.752 00:29:47 -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:15:53.752 00:15:53.752 real 0m5.310s 00:15:53.752 user 0m9.877s 00:15:53.752 sys 0m0.417s 00:15:53.752 00:29:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:53.752 00:29:47 -- common/autotest_common.sh@10 -- # set +x 00:15:53.752 ************************************ 00:15:53.752 END TEST bdev_qd_sampling 00:15:53.752 ************************************ 00:15:53.752 00:29:47 -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:15:53.752 00:29:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:53.752 00:29:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:53.752 00:29:47 -- common/autotest_common.sh@10 -- # set +x 00:15:53.752 ************************************ 00:15:53.752 START TEST bdev_error 00:15:53.752 ************************************ 00:15:53.752 00:29:47 -- 
common/autotest_common.sh@1111 -- # error_test_suite '' 00:15:53.752 00:29:47 -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:15:53.752 00:29:47 -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:15:53.752 00:29:47 -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:15:53.752 00:29:47 -- bdev/blockdev.sh@472 -- # ERR_PID=119484 00:15:53.752 00:29:47 -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:15:53.752 00:29:47 -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 119484' 00:15:53.752 Process error testing pid: 119484 00:15:53.752 00:29:47 -- bdev/blockdev.sh@474 -- # waitforlisten 119484 00:15:53.752 00:29:47 -- common/autotest_common.sh@817 -- # '[' -z 119484 ']' 00:15:53.752 00:29:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.752 00:29:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:53.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.752 00:29:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.752 00:29:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:53.752 00:29:47 -- common/autotest_common.sh@10 -- # set +x 00:15:53.752 [2024-04-24 00:29:47.203826] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:15:53.752 [2024-04-24 00:29:47.204034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119484 ] 00:15:53.752 [2024-04-24 00:29:47.382734] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.040 [2024-04-24 00:29:47.620812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.618 00:29:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:54.618 00:29:48 -- common/autotest_common.sh@850 -- # return 0 00:15:54.618 00:29:48 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:15:54.618 00:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.618 00:29:48 -- common/autotest_common.sh@10 -- # set +x 00:15:54.618 Dev_1 00:15:54.618 00:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.618 00:29:48 -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:15:54.618 00:29:48 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:15:54.618 00:29:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:54.618 00:29:48 -- common/autotest_common.sh@887 -- # local i 00:15:54.618 00:29:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:54.618 00:29:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:54.618 00:29:48 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:15:54.618 00:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.618 00:29:48 -- common/autotest_common.sh@10 -- # set +x 00:15:54.618 00:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.618 00:29:48 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:54.618 00:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.618 00:29:48 -- common/autotest_common.sh@10 -- # set +x 00:15:54.618 [ 00:15:54.618 { 00:15:54.618 "name": "Dev_1", 00:15:54.618 "aliases": [ 00:15:54.618 "01ee8637-9fe3-4170-8aac-104f4153fea1" 
00:15:54.618 ], 00:15:54.618 "product_name": "Malloc disk", 00:15:54.618 "block_size": 512, 00:15:54.618 "num_blocks": 262144, 00:15:54.618 "uuid": "01ee8637-9fe3-4170-8aac-104f4153fea1", 00:15:54.618 "assigned_rate_limits": { 00:15:54.618 "rw_ios_per_sec": 0, 00:15:54.618 "rw_mbytes_per_sec": 0, 00:15:54.618 "r_mbytes_per_sec": 0, 00:15:54.618 "w_mbytes_per_sec": 0 00:15:54.618 }, 00:15:54.618 "claimed": false, 00:15:54.618 "zoned": false, 00:15:54.618 "supported_io_types": { 00:15:54.618 "read": true, 00:15:54.618 "write": true, 00:15:54.618 "unmap": true, 00:15:54.618 "write_zeroes": true, 00:15:54.618 "flush": true, 00:15:54.618 "reset": true, 00:15:54.618 "compare": false, 00:15:54.618 "compare_and_write": false, 00:15:54.618 "abort": true, 00:15:54.618 "nvme_admin": false, 00:15:54.618 "nvme_io": false 00:15:54.618 }, 00:15:54.618 "memory_domains": [ 00:15:54.618 { 00:15:54.618 "dma_device_id": "system", 00:15:54.618 "dma_device_type": 1 00:15:54.618 }, 00:15:54.618 { 00:15:54.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.618 "dma_device_type": 2 00:15:54.618 } 00:15:54.618 ], 00:15:54.618 "driver_specific": {} 00:15:54.618 } 00:15:54.618 ] 00:15:54.618 00:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.618 00:29:48 -- common/autotest_common.sh@893 -- # return 0 00:15:54.618 00:29:48 -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:15:54.618 00:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.618 00:29:48 -- common/autotest_common.sh@10 -- # set +x 00:15:54.618 true 00:15:54.618 00:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.618 00:29:48 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:54.618 00:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.618 00:29:48 -- common/autotest_common.sh@10 -- # set +x 00:15:54.897 Dev_2 00:15:54.897 00:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.897 00:29:48 -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:15:54.897 00:29:48 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:15:54.897 00:29:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:54.897 00:29:48 -- common/autotest_common.sh@887 -- # local i 00:15:54.897 00:29:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:54.897 00:29:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:54.897 00:29:48 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:15:54.897 00:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.897 00:29:48 -- common/autotest_common.sh@10 -- # set +x 00:15:54.897 00:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.897 00:29:48 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:54.897 00:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.897 00:29:48 -- common/autotest_common.sh@10 -- # set +x 00:15:54.897 [ 00:15:54.897 { 00:15:54.897 "name": "Dev_2", 00:15:54.897 "aliases": [ 00:15:54.897 "dc89096b-eadc-4668-af49-adcd6cc7c529" 00:15:54.897 ], 00:15:54.897 "product_name": "Malloc disk", 00:15:54.897 "block_size": 512, 00:15:54.897 "num_blocks": 262144, 00:15:54.897 "uuid": "dc89096b-eadc-4668-af49-adcd6cc7c529", 00:15:54.897 "assigned_rate_limits": { 00:15:54.897 "rw_ios_per_sec": 0, 00:15:54.897 "rw_mbytes_per_sec": 0, 00:15:54.897 "r_mbytes_per_sec": 0, 00:15:54.897 "w_mbytes_per_sec": 0 00:15:54.897 }, 00:15:54.897 "claimed": false, 00:15:54.897 "zoned": false, 00:15:54.897 
"supported_io_types": { 00:15:54.897 "read": true, 00:15:54.897 "write": true, 00:15:54.897 "unmap": true, 00:15:54.897 "write_zeroes": true, 00:15:54.897 "flush": true, 00:15:54.897 "reset": true, 00:15:54.897 "compare": false, 00:15:54.897 "compare_and_write": false, 00:15:54.897 "abort": true, 00:15:54.897 "nvme_admin": false, 00:15:54.898 "nvme_io": false 00:15:54.898 }, 00:15:54.898 "memory_domains": [ 00:15:54.898 { 00:15:54.898 "dma_device_id": "system", 00:15:54.898 "dma_device_type": 1 00:15:54.898 }, 00:15:54.898 { 00:15:54.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.898 "dma_device_type": 2 00:15:54.898 } 00:15:54.898 ], 00:15:54.898 "driver_specific": {} 00:15:54.898 } 00:15:54.898 ] 00:15:54.898 00:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.898 00:29:48 -- common/autotest_common.sh@893 -- # return 0 00:15:54.898 00:29:48 -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:54.898 00:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.898 00:29:48 -- common/autotest_common.sh@10 -- # set +x 00:15:54.898 00:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.898 00:29:48 -- bdev/blockdev.sh@484 -- # sleep 1 00:15:54.898 00:29:48 -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:54.898 Running I/O for 5 seconds... 00:15:55.836 00:29:49 -- bdev/blockdev.sh@487 -- # kill -0 119484 00:15:55.836 Process is existed as continue on error is set. Pid: 119484 00:15:55.836 00:29:49 -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 119484' 00:15:55.836 00:29:49 -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:15:55.836 00:29:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.836 00:29:49 -- common/autotest_common.sh@10 -- # set +x 00:15:55.836 00:29:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.836 00:29:49 -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:15:55.836 00:29:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.836 00:29:49 -- common/autotest_common.sh@10 -- # set +x 00:15:55.836 Timeout while waiting for response: 00:15:55.836 00:15:55.836 00:15:56.399 00:29:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.399 00:29:50 -- bdev/blockdev.sh@497 -- # sleep 5 00:16:00.582 00:16:00.582 Latency(us) 00:16:00.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.582 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:00.582 EE_Dev_1 : 0.91 35360.01 138.13 5.47 0.00 449.15 131.66 1185.89 00:16:00.582 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:00.582 Dev_2 : 5.00 76120.62 297.35 0.00 0.00 207.30 59.98 461373.44 00:16:00.582 =================================================================================================================== 00:16:00.582 Total : 111480.63 435.47 5.47 0.00 226.22 59.98 461373.44 00:16:01.516 00:29:55 -- bdev/blockdev.sh@499 -- # killprocess 119484 00:16:01.516 00:29:55 -- common/autotest_common.sh@936 -- # '[' -z 119484 ']' 00:16:01.516 00:29:55 -- common/autotest_common.sh@940 -- # kill -0 119484 00:16:01.516 00:29:55 -- common/autotest_common.sh@941 -- # uname 00:16:01.516 00:29:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:01.516 00:29:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119484 00:16:01.516 00:29:55 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:01.516 00:29:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:01.516 killing process with pid 119484 00:16:01.516 00:29:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119484' 00:16:01.516 00:29:55 -- common/autotest_common.sh@955 -- # kill 119484 00:16:01.516 Received shutdown signal, test time was about 5.000000 seconds 00:16:01.516 00:16:01.516 Latency(us) 00:16:01.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.516 =================================================================================================================== 00:16:01.516 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.516 00:29:55 -- common/autotest_common.sh@960 -- # wait 119484 00:16:03.412 00:29:56 -- bdev/blockdev.sh@503 -- # ERR_PID=119613 00:16:03.412 00:29:56 -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:16:03.412 Process error testing pid: 119613 00:16:03.412 00:29:56 -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 119613' 00:16:03.412 00:29:56 -- bdev/blockdev.sh@505 -- # waitforlisten 119613 00:16:03.412 00:29:56 -- common/autotest_common.sh@817 -- # '[' -z 119613 ']' 00:16:03.412 00:29:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.412 00:29:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:03.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.412 00:29:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.412 00:29:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:03.412 00:29:56 -- common/autotest_common.sh@10 -- # set +x 00:16:03.412 [2024-04-24 00:29:57.052276] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:16:03.412 [2024-04-24 00:29:57.053221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119613 ] 00:16:03.669 [2024-04-24 00:29:57.240904] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.927 [2024-04-24 00:29:57.524431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.492 00:29:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:04.492 00:29:58 -- common/autotest_common.sh@850 -- # return 0 00:16:04.492 00:29:58 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:16:04.492 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.492 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:04.492 Dev_1 00:16:04.492 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.492 00:29:58 -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:16:04.492 00:29:58 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:16:04.492 00:29:58 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:04.492 00:29:58 -- common/autotest_common.sh@887 -- # local i 00:16:04.492 00:29:58 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:04.492 00:29:58 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:04.492 00:29:58 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:16:04.492 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.492 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:04.492 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.492 00:29:58 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:16:04.492 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.492 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:04.492 [ 00:16:04.492 { 00:16:04.492 "name": "Dev_1", 00:16:04.492 "aliases": [ 00:16:04.492 "3aa25e98-8e9b-41fa-9e87-ef4675dbcc94" 00:16:04.492 ], 00:16:04.492 "product_name": "Malloc disk", 00:16:04.492 "block_size": 512, 00:16:04.492 "num_blocks": 262144, 00:16:04.492 "uuid": "3aa25e98-8e9b-41fa-9e87-ef4675dbcc94", 00:16:04.492 "assigned_rate_limits": { 00:16:04.492 "rw_ios_per_sec": 0, 00:16:04.492 "rw_mbytes_per_sec": 0, 00:16:04.492 "r_mbytes_per_sec": 0, 00:16:04.492 "w_mbytes_per_sec": 0 00:16:04.492 }, 00:16:04.492 "claimed": false, 00:16:04.492 "zoned": false, 00:16:04.492 "supported_io_types": { 00:16:04.492 "read": true, 00:16:04.492 "write": true, 00:16:04.492 "unmap": true, 00:16:04.492 "write_zeroes": true, 00:16:04.492 "flush": true, 00:16:04.492 "reset": true, 00:16:04.492 "compare": false, 00:16:04.492 "compare_and_write": false, 00:16:04.492 "abort": true, 00:16:04.492 "nvme_admin": false, 00:16:04.492 "nvme_io": false 00:16:04.492 }, 00:16:04.492 "memory_domains": [ 00:16:04.492 { 00:16:04.492 "dma_device_id": "system", 00:16:04.492 "dma_device_type": 1 00:16:04.492 }, 00:16:04.492 { 00:16:04.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.492 "dma_device_type": 2 00:16:04.492 } 00:16:04.492 ], 00:16:04.492 "driver_specific": {} 00:16:04.492 } 00:16:04.492 ] 00:16:04.492 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.492 00:29:58 -- common/autotest_common.sh@893 -- # return 0 00:16:04.492 00:29:58 -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:16:04.750 00:29:58 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:16:04.750 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:04.750 true 00:16:04.750 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.750 00:29:58 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:16:04.750 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.750 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:04.750 Dev_2 00:16:04.750 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.750 00:29:58 -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:16:04.750 00:29:58 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:16:04.750 00:29:58 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:04.750 00:29:58 -- common/autotest_common.sh@887 -- # local i 00:16:04.750 00:29:58 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:04.750 00:29:58 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:04.750 00:29:58 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:16:04.750 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.750 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:04.750 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.750 00:29:58 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:16:04.750 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.750 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:04.750 [ 00:16:04.750 { 00:16:04.750 "name": "Dev_2", 00:16:04.750 "aliases": [ 00:16:04.750 "701d745f-4b75-42a2-ae5d-a9d74d993f1e" 00:16:04.750 ], 00:16:04.750 "product_name": "Malloc disk", 00:16:04.750 "block_size": 512, 00:16:04.750 "num_blocks": 262144, 00:16:04.750 "uuid": "701d745f-4b75-42a2-ae5d-a9d74d993f1e", 00:16:04.750 "assigned_rate_limits": { 00:16:04.750 "rw_ios_per_sec": 0, 00:16:04.750 "rw_mbytes_per_sec": 0, 00:16:04.750 "r_mbytes_per_sec": 0, 00:16:04.750 "w_mbytes_per_sec": 0 00:16:04.750 }, 00:16:04.750 "claimed": false, 00:16:04.750 "zoned": false, 00:16:04.750 "supported_io_types": { 00:16:04.750 "read": true, 00:16:04.750 "write": true, 00:16:04.750 "unmap": true, 00:16:04.750 "write_zeroes": true, 00:16:04.750 "flush": true, 00:16:04.750 "reset": true, 00:16:04.750 "compare": false, 00:16:04.750 "compare_and_write": false, 00:16:04.750 "abort": true, 00:16:04.750 "nvme_admin": false, 00:16:04.750 "nvme_io": false 00:16:04.750 }, 00:16:04.750 "memory_domains": [ 00:16:04.750 { 00:16:04.750 "dma_device_id": "system", 00:16:04.750 "dma_device_type": 1 00:16:04.750 }, 00:16:04.750 { 00:16:04.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.750 "dma_device_type": 2 00:16:04.750 } 00:16:04.750 ], 00:16:04.750 "driver_specific": {} 00:16:04.750 } 00:16:04.750 ] 00:16:04.750 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.750 00:29:58 -- common/autotest_common.sh@893 -- # return 0 00:16:04.750 00:29:58 -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:16:04.750 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.750 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:16:04.750 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.750 00:29:58 -- bdev/blockdev.sh@515 -- # NOT wait 119613 00:16:04.750 00:29:58 -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:16:04.750 00:29:58 -- common/autotest_common.sh@638 -- # local es=0 
00:16:04.750 00:29:58 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 119613 00:16:04.750 00:29:58 -- common/autotest_common.sh@626 -- # local arg=wait 00:16:04.750 00:29:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:04.750 00:29:58 -- common/autotest_common.sh@630 -- # type -t wait 00:16:04.750 00:29:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:04.750 00:29:58 -- common/autotest_common.sh@641 -- # wait 119613 00:16:05.008 Running I/O for 5 seconds... 00:16:05.008 task offset: 222024 on job bdev=EE_Dev_1 fails 00:16:05.008 00:16:05.008 Latency(us) 00:16:05.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.008 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:05.008 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:16:05.008 EE_Dev_1 : 0.00 22703.82 88.69 5159.96 0.00 473.73 150.19 842.61 00:16:05.008 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:16:05.008 Dev_2 : 0.00 15108.59 59.02 0.00 0.00 769.22 191.15 1404.34 00:16:05.008 =================================================================================================================== 00:16:05.008 Total : 37812.41 147.70 5159.96 0.00 634.00 150.19 1404.34 00:16:05.008 [2024-04-24 00:29:58.660269] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:05.008 request: 00:16:05.008 { 00:16:05.008 "method": "perform_tests", 00:16:05.008 "req_id": 1 00:16:05.008 } 00:16:05.008 Got JSON-RPC error response 00:16:05.008 response: 00:16:05.008 { 00:16:05.008 "code": -32603, 00:16:05.008 "message": "bdevperf failed with error Operation not permitted" 00:16:05.008 } 00:16:07.532 00:30:01 -- common/autotest_common.sh@641 -- # es=255 00:16:07.532 00:30:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:07.532 00:30:01 -- common/autotest_common.sh@650 -- # es=127 00:16:07.532 00:30:01 -- common/autotest_common.sh@651 -- # case "$es" in 00:16:07.532 00:30:01 -- common/autotest_common.sh@658 -- # es=1 00:16:07.532 00:30:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:07.532 00:16:07.532 real 0m14.081s 00:16:07.532 user 0m14.017s 00:16:07.532 sys 0m1.129s 00:16:07.532 00:30:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:07.532 00:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:07.532 ************************************ 00:16:07.532 END TEST bdev_error 00:16:07.532 ************************************ 00:16:07.532 00:30:01 -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:16:07.532 00:30:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:07.532 00:30:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:07.532 00:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:07.532 ************************************ 00:16:07.532 START TEST bdev_stat 00:16:07.532 ************************************ 00:16:07.532 00:30:01 -- common/autotest_common.sh@1111 -- # stat_test_suite '' 00:16:07.532 00:30:01 -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:16:07.532 00:30:01 -- bdev/blockdev.sh@596 -- # STAT_PID=119692 00:16:07.532 00:30:01 -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 119692' 00:16:07.532 Process Bdev IO statistics testing pid: 119692 00:16:07.532 00:30:01 -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:16:07.532 00:30:01 -- bdev/blockdev.sh@598 -- # trap 
'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:16:07.532 00:30:01 -- bdev/blockdev.sh@599 -- # waitforlisten 119692 00:16:07.532 00:30:01 -- common/autotest_common.sh@817 -- # '[' -z 119692 ']' 00:16:07.532 00:30:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.532 00:30:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:07.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.532 00:30:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.532 00:30:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:07.532 00:30:01 -- common/autotest_common.sh@10 -- # set +x 00:16:07.788 [2024-04-24 00:30:01.388020] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:16:07.788 [2024-04-24 00:30:01.388285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119692 ] 00:16:08.045 [2024-04-24 00:30:01.594643] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:08.302 [2024-04-24 00:30:01.913277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.303 [2024-04-24 00:30:01.913278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.869 00:30:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:08.869 00:30:02 -- common/autotest_common.sh@850 -- # return 0 00:16:08.869 00:30:02 -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:16:08.869 00:30:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.869 00:30:02 -- common/autotest_common.sh@10 -- # set +x 00:16:08.869 Malloc_STAT 00:16:08.869 00:30:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:08.869 00:30:02 -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:16:08.869 00:30:02 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_STAT 00:16:08.869 00:30:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:08.869 00:30:02 -- common/autotest_common.sh@887 -- # local i 00:16:08.869 00:30:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:08.869 00:30:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:08.869 00:30:02 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:16:08.869 00:30:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.869 00:30:02 -- common/autotest_common.sh@10 -- # set +x 00:16:08.869 00:30:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:08.869 00:30:02 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:16:08.869 00:30:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:08.869 00:30:02 -- common/autotest_common.sh@10 -- # set +x 00:16:08.869 [ 00:16:08.869 { 00:16:08.869 "name": "Malloc_STAT", 00:16:08.869 "aliases": [ 00:16:08.869 "70867fe0-73ed-43f9-bb82-026e0cd30af6" 00:16:08.869 ], 00:16:08.869 "product_name": "Malloc disk", 00:16:08.869 "block_size": 512, 00:16:08.869 "num_blocks": 262144, 00:16:08.869 "uuid": "70867fe0-73ed-43f9-bb82-026e0cd30af6", 00:16:08.869 "assigned_rate_limits": { 00:16:08.869 "rw_ios_per_sec": 0, 00:16:08.869 "rw_mbytes_per_sec": 0, 00:16:08.869 "r_mbytes_per_sec": 0, 00:16:08.869 "w_mbytes_per_sec": 0 00:16:08.869 }, 00:16:08.869 "claimed": 
false, 00:16:08.869 "zoned": false, 00:16:08.869 "supported_io_types": { 00:16:08.869 "read": true, 00:16:08.870 "write": true, 00:16:08.870 "unmap": true, 00:16:08.870 "write_zeroes": true, 00:16:08.870 "flush": true, 00:16:08.870 "reset": true, 00:16:08.870 "compare": false, 00:16:08.870 "compare_and_write": false, 00:16:08.870 "abort": true, 00:16:08.870 "nvme_admin": false, 00:16:08.870 "nvme_io": false 00:16:08.870 }, 00:16:08.870 "memory_domains": [ 00:16:08.870 { 00:16:08.870 "dma_device_id": "system", 00:16:08.870 "dma_device_type": 1 00:16:08.870 }, 00:16:08.870 { 00:16:08.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.870 "dma_device_type": 2 00:16:08.870 } 00:16:08.870 ], 00:16:08.870 "driver_specific": {} 00:16:08.870 } 00:16:08.870 ] 00:16:08.870 00:30:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:08.870 00:30:02 -- common/autotest_common.sh@893 -- # return 0 00:16:08.870 00:30:02 -- bdev/blockdev.sh@605 -- # sleep 2 00:16:08.870 00:30:02 -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:09.127 Running I/O for 10 seconds... 00:16:11.072 00:30:04 -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:16:11.072 00:30:04 -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:16:11.072 00:30:04 -- bdev/blockdev.sh@560 -- # local iostats 00:16:11.072 00:30:04 -- bdev/blockdev.sh@561 -- # local io_count1 00:16:11.072 00:30:04 -- bdev/blockdev.sh@562 -- # local io_count2 00:16:11.072 00:30:04 -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:16:11.072 00:30:04 -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:16:11.072 00:30:04 -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:16:11.072 00:30:04 -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:16:11.072 00:30:04 -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:16:11.072 00:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.072 00:30:04 -- common/autotest_common.sh@10 -- # set +x 00:16:11.072 00:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.072 00:30:04 -- bdev/blockdev.sh@568 -- # iostats='{ 00:16:11.072 "tick_rate": 2100000000, 00:16:11.072 "ticks": 1878885360738, 00:16:11.072 "bdevs": [ 00:16:11.072 { 00:16:11.072 "name": "Malloc_STAT", 00:16:11.072 "bytes_read": 780177920, 00:16:11.072 "num_read_ops": 190467, 00:16:11.072 "bytes_written": 0, 00:16:11.072 "num_write_ops": 0, 00:16:11.072 "bytes_unmapped": 0, 00:16:11.072 "num_unmap_ops": 0, 00:16:11.072 "bytes_copied": 0, 00:16:11.072 "num_copy_ops": 0, 00:16:11.072 "read_latency_ticks": 2047395543412, 00:16:11.072 "max_read_latency_ticks": 14503966, 00:16:11.072 "min_read_latency_ticks": 349184, 00:16:11.072 "write_latency_ticks": 0, 00:16:11.072 "max_write_latency_ticks": 0, 00:16:11.072 "min_write_latency_ticks": 0, 00:16:11.072 "unmap_latency_ticks": 0, 00:16:11.072 "max_unmap_latency_ticks": 0, 00:16:11.072 "min_unmap_latency_ticks": 0, 00:16:11.073 "copy_latency_ticks": 0, 00:16:11.073 "max_copy_latency_ticks": 0, 00:16:11.073 "min_copy_latency_ticks": 0, 00:16:11.073 "io_error": {} 00:16:11.073 } 00:16:11.073 ] 00:16:11.073 }' 00:16:11.073 00:30:04 -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:16:11.073 00:30:04 -- bdev/blockdev.sh@569 -- # io_count1=190467 00:16:11.073 00:30:04 -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:16:11.073 00:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.073 00:30:04 -- 
common/autotest_common.sh@10 -- # set +x 00:16:11.073 00:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.073 00:30:04 -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:16:11.073 "tick_rate": 2100000000, 00:16:11.073 "ticks": 1879000949820, 00:16:11.073 "name": "Malloc_STAT", 00:16:11.073 "channels": [ 00:16:11.073 { 00:16:11.073 "thread_id": 2, 00:16:11.073 "bytes_read": 397410304, 00:16:11.073 "num_read_ops": 97024, 00:16:11.073 "bytes_written": 0, 00:16:11.073 "num_write_ops": 0, 00:16:11.073 "bytes_unmapped": 0, 00:16:11.073 "num_unmap_ops": 0, 00:16:11.073 "bytes_copied": 0, 00:16:11.073 "num_copy_ops": 0, 00:16:11.073 "read_latency_ticks": 1052918093116, 00:16:11.073 "max_read_latency_ticks": 14887700, 00:16:11.073 "min_read_latency_ticks": 8011000, 00:16:11.073 "write_latency_ticks": 0, 00:16:11.073 "max_write_latency_ticks": 0, 00:16:11.073 "min_write_latency_ticks": 0, 00:16:11.073 "unmap_latency_ticks": 0, 00:16:11.073 "max_unmap_latency_ticks": 0, 00:16:11.073 "min_unmap_latency_ticks": 0, 00:16:11.073 "copy_latency_ticks": 0, 00:16:11.073 "max_copy_latency_ticks": 0, 00:16:11.073 "min_copy_latency_ticks": 0 00:16:11.073 }, 00:16:11.073 { 00:16:11.073 "thread_id": 3, 00:16:11.073 "bytes_read": 404750336, 00:16:11.073 "num_read_ops": 98816, 00:16:11.073 "bytes_written": 0, 00:16:11.073 "num_write_ops": 0, 00:16:11.073 "bytes_unmapped": 0, 00:16:11.073 "num_unmap_ops": 0, 00:16:11.073 "bytes_copied": 0, 00:16:11.073 "num_copy_ops": 0, 00:16:11.073 "read_latency_ticks": 1053207013548, 00:16:11.073 "max_read_latency_ticks": 13837576, 00:16:11.073 "min_read_latency_ticks": 7626832, 00:16:11.073 "write_latency_ticks": 0, 00:16:11.073 "max_write_latency_ticks": 0, 00:16:11.073 "min_write_latency_ticks": 0, 00:16:11.073 "unmap_latency_ticks": 0, 00:16:11.073 "max_unmap_latency_ticks": 0, 00:16:11.073 "min_unmap_latency_ticks": 0, 00:16:11.073 "copy_latency_ticks": 0, 00:16:11.073 "max_copy_latency_ticks": 0, 00:16:11.073 "min_copy_latency_ticks": 0 00:16:11.073 } 00:16:11.073 ] 00:16:11.073 }' 00:16:11.073 00:30:04 -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:16:11.073 00:30:04 -- bdev/blockdev.sh@572 -- # io_count_per_channel1=97024 00:16:11.073 00:30:04 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=97024 00:16:11.073 00:30:04 -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:16:11.073 00:30:04 -- bdev/blockdev.sh@574 -- # io_count_per_channel2=98816 00:16:11.073 00:30:04 -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=195840 00:16:11.073 00:30:04 -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:16:11.073 00:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.073 00:30:04 -- common/autotest_common.sh@10 -- # set +x 00:16:11.073 00:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.073 00:30:04 -- bdev/blockdev.sh@577 -- # iostats='{ 00:16:11.073 "tick_rate": 2100000000, 00:16:11.073 "ticks": 1879218392664, 00:16:11.073 "bdevs": [ 00:16:11.073 { 00:16:11.073 "name": "Malloc_STAT", 00:16:11.073 "bytes_read": 844141056, 00:16:11.073 "num_read_ops": 206083, 00:16:11.073 "bytes_written": 0, 00:16:11.073 "num_write_ops": 0, 00:16:11.073 "bytes_unmapped": 0, 00:16:11.073 "num_unmap_ops": 0, 00:16:11.073 "bytes_copied": 0, 00:16:11.073 "num_copy_ops": 0, 00:16:11.073 "read_latency_ticks": 2219365238310, 00:16:11.073 "max_read_latency_ticks": 15536338, 00:16:11.073 "min_read_latency_ticks": 349184, 00:16:11.073 "write_latency_ticks": 0, 00:16:11.073 
"max_write_latency_ticks": 0, 00:16:11.073 "min_write_latency_ticks": 0, 00:16:11.073 "unmap_latency_ticks": 0, 00:16:11.073 "max_unmap_latency_ticks": 0, 00:16:11.073 "min_unmap_latency_ticks": 0, 00:16:11.073 "copy_latency_ticks": 0, 00:16:11.073 "max_copy_latency_ticks": 0, 00:16:11.073 "min_copy_latency_ticks": 0, 00:16:11.073 "io_error": {} 00:16:11.073 } 00:16:11.073 ] 00:16:11.073 }' 00:16:11.073 00:30:04 -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:16:11.073 00:30:04 -- bdev/blockdev.sh@578 -- # io_count2=206083 00:16:11.073 00:30:04 -- bdev/blockdev.sh@583 -- # '[' 195840 -lt 190467 ']' 00:16:11.073 00:30:04 -- bdev/blockdev.sh@583 -- # '[' 195840 -gt 206083 ']' 00:16:11.073 00:30:04 -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:16:11.073 00:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.073 00:30:04 -- common/autotest_common.sh@10 -- # set +x 00:16:11.330 00:16:11.330 Latency(us) 00:16:11.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.330 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:16:11.330 Malloc_STAT : 2.13 49220.03 192.27 0.00 0.00 5188.46 1396.54 7427.41 00:16:11.330 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:16:11.330 Malloc_STAT : 2.13 50411.42 196.92 0.00 0.00 5066.29 1139.08 6772.05 00:16:11.330 =================================================================================================================== 00:16:11.330 Total : 99631.45 389.19 0.00 0.00 5126.64 1139.08 7427.41 00:16:11.330 0 00:16:11.330 00:30:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.330 00:30:05 -- bdev/blockdev.sh@609 -- # killprocess 119692 00:16:11.330 00:30:05 -- common/autotest_common.sh@936 -- # '[' -z 119692 ']' 00:16:11.330 00:30:05 -- common/autotest_common.sh@940 -- # kill -0 119692 00:16:11.330 00:30:05 -- common/autotest_common.sh@941 -- # uname 00:16:11.330 00:30:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:11.330 00:30:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119692 00:16:11.330 00:30:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:11.330 killing process with pid 119692 00:16:11.330 00:30:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:11.330 00:30:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119692' 00:16:11.330 00:30:05 -- common/autotest_common.sh@955 -- # kill 119692 00:16:11.330 00:30:05 -- common/autotest_common.sh@960 -- # wait 119692 00:16:11.330 Received shutdown signal, test time was about 2.329008 seconds 00:16:11.330 00:16:11.330 Latency(us) 00:16:11.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.330 =================================================================================================================== 00:16:11.330 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:13.227 00:30:06 -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:16:13.227 00:16:13.227 real 0m5.634s 00:16:13.227 user 0m10.276s 00:16:13.227 sys 0m0.605s 00:16:13.227 ************************************ 00:16:13.227 END TEST bdev_stat 00:16:13.227 ************************************ 00:16:13.227 00:30:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:13.227 00:30:06 -- common/autotest_common.sh@10 -- # set +x 00:16:13.227 00:30:06 -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:16:13.227 00:30:06 -- bdev/blockdev.sh@798 -- 
# [[ bdev == crypto_sw ]] 00:16:13.227 00:30:06 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:16:13.227 00:30:06 -- bdev/blockdev.sh@811 -- # cleanup 00:16:13.227 00:30:06 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:13.227 00:30:06 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:13.228 00:30:06 -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:16:13.228 00:30:06 -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:16:13.228 00:30:06 -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:16:13.228 00:30:06 -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:16:13.228 00:16:13.228 real 2m43.680s 00:16:13.228 user 6m14.101s 00:16:13.228 sys 0m25.591s 00:16:13.228 00:30:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:13.228 00:30:06 -- common/autotest_common.sh@10 -- # set +x 00:16:13.228 ************************************ 00:16:13.228 END TEST blockdev_general 00:16:13.228 ************************************ 00:16:13.487 00:30:07 -- spdk/autotest.sh@186 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:16:13.487 00:30:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:13.487 00:30:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.487 00:30:07 -- common/autotest_common.sh@10 -- # set +x 00:16:13.487 ************************************ 00:16:13.487 START TEST bdev_raid 00:16:13.487 ************************************ 00:16:13.487 00:30:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:16:13.487 * Looking for test storage... 00:16:13.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:13.487 00:30:07 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:13.487 00:30:07 -- bdev/nbd_common.sh@6 -- # set -e 00:16:13.487 00:30:07 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:16:13.487 00:30:07 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:16:13.487 00:30:07 -- bdev/bdev_raid.sh@716 -- # uname -s 00:16:13.487 00:30:07 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:16:13.487 00:30:07 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:16:13.487 00:30:07 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:16:13.487 00:30:07 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:16:13.487 00:30:07 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:16:13.487 00:30:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:13.487 00:30:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.487 00:30:07 -- common/autotest_common.sh@10 -- # set +x 00:16:13.744 ************************************ 00:16:13.744 START TEST raid_function_test_raid0 00:16:13.744 ************************************ 00:16:13.744 00:30:07 -- common/autotest_common.sh@1111 -- # raid_function_test raid0 00:16:13.744 00:30:07 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:16:13.744 00:30:07 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:16:13.744 00:30:07 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:16:13.744 00:30:07 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:13.744 00:30:07 -- bdev/bdev_raid.sh@86 -- # raid_pid=119868 00:16:13.744 Process raid pid: 119868 00:16:13.744 00:30:07 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 
119868' 00:16:13.744 00:30:07 -- bdev/bdev_raid.sh@88 -- # waitforlisten 119868 /var/tmp/spdk-raid.sock 00:16:13.744 00:30:07 -- common/autotest_common.sh@817 -- # '[' -z 119868 ']' 00:16:13.744 00:30:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:13.744 00:30:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:13.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:13.744 00:30:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:13.744 00:30:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:13.744 00:30:07 -- common/autotest_common.sh@10 -- # set +x 00:16:13.744 [2024-04-24 00:30:07.349581] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:16:13.744 [2024-04-24 00:30:07.349754] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.744 [2024-04-24 00:30:07.524195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.311 [2024-04-24 00:30:07.802834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.311 [2024-04-24 00:30:08.052432] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.569 00:30:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:14.569 00:30:08 -- common/autotest_common.sh@850 -- # return 0 00:16:14.569 00:30:08 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:16:14.569 00:30:08 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:16:14.569 00:30:08 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:14.569 00:30:08 -- bdev/bdev_raid.sh@70 -- # cat 00:16:14.569 00:30:08 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:16:15.138 [2024-04-24 00:30:08.633901] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:15.138 [2024-04-24 00:30:08.636370] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:15.138 [2024-04-24 00:30:08.636603] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:16:15.138 [2024-04-24 00:30:08.636725] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:15.138 [2024-04-24 00:30:08.636948] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:15.138 [2024-04-24 00:30:08.637408] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:16:15.138 [2024-04-24 00:30:08.637534] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000010e00 00:16:15.138 [2024-04-24 00:30:08.637885] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.138 Base_1 00:16:15.138 Base_2 00:16:15.138 00:30:08 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:15.138 00:30:08 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:15.138 00:30:08 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:16:15.395 00:30:08 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:16:15.395 00:30:08 -- 
bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:16:15.395 00:30:08 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:16:15.395 00:30:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:15.395 00:30:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:16:15.395 00:30:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:15.395 00:30:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:15.395 00:30:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:15.395 00:30:08 -- bdev/nbd_common.sh@12 -- # local i 00:16:15.395 00:30:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:15.395 00:30:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:15.395 00:30:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:16:15.652 [2024-04-24 00:30:09.226167] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:15.652 /dev/nbd0 00:16:15.652 00:30:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:15.652 00:30:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:15.652 00:30:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:15.652 00:30:09 -- common/autotest_common.sh@855 -- # local i 00:16:15.652 00:30:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:15.652 00:30:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:15.652 00:30:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:15.653 00:30:09 -- common/autotest_common.sh@859 -- # break 00:16:15.653 00:30:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:15.653 00:30:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:15.653 00:30:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.653 1+0 records in 00:16:15.653 1+0 records out 00:16:15.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033978 s, 12.1 MB/s 00:16:15.653 00:30:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.653 00:30:09 -- common/autotest_common.sh@872 -- # size=4096 00:16:15.653 00:30:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.653 00:30:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:15.653 00:30:09 -- common/autotest_common.sh@875 -- # return 0 00:16:15.653 00:30:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.653 00:30:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:15.653 00:30:09 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:15.653 00:30:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:15.653 00:30:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:15.910 00:30:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:15.910 { 00:16:15.910 "nbd_device": "/dev/nbd0", 00:16:15.910 "bdev_name": "raid" 00:16:15.910 } 00:16:15.910 ]' 00:16:15.910 00:30:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:15.911 00:30:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:15.911 { 00:16:15.911 "nbd_device": "/dev/nbd0", 00:16:15.911 "bdev_name": "raid" 00:16:15.911 } 00:16:15.911 ]' 00:16:15.911 00:30:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:15.911 00:30:09 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:15.911 00:30:09 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:15.911 00:30:09 -- bdev/nbd_common.sh@65 -- # count=1 00:16:15.911 00:30:09 -- bdev/nbd_common.sh@66 -- # echo 1 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@98 -- # count=1 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@20 -- # local blksize 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:16:15.911 4096+0 records in 00:16:15.911 4096+0 records out 00:16:15.911 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.023263 s, 90.1 MB/s 00:16:15.911 00:30:09 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:16.169 4096+0 records in 00:16:16.169 4096+0 records out 00:16:16.169 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.307863 s, 6.8 MB/s 00:16:16.169 00:30:09 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:16:16.427 00:30:09 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:16.427 00:30:09 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:16:16.427 00:30:09 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:16.427 00:30:09 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:16:16.427 00:30:09 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:16:16.427 00:30:09 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:16.427 128+0 records in 00:16:16.427 128+0 records out 00:16:16.427 65536 bytes (66 kB, 64 KiB) copied, 0.000972362 s, 67.4 MB/s 00:16:16.427 00:30:09 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:16.427 00:30:09 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:16.427 00:30:09 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:16.427 00:30:09 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:16.427 00:30:09 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:16.427 00:30:09 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:16.427 2035+0 records in 00:16:16.427 2035+0 records out 00:16:16.427 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00793341 s, 131 MB/s 00:16:16.427 00:30:10 
-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:16.427 456+0 records in 00:16:16.427 456+0 records out 00:16:16.427 233472 bytes (233 kB, 228 KiB) copied, 0.00260824 s, 89.5 MB/s 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@53 -- # return 0 00:16:16.427 00:30:10 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:16:16.427 00:30:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:16.427 00:30:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:16.427 00:30:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:16.427 00:30:10 -- bdev/nbd_common.sh@51 -- # local i 00:16:16.427 00:30:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.427 00:30:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:16:16.685 [2024-04-24 00:30:10.383201] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.685 00:30:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:16.685 00:30:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:16.685 00:30:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:16.685 00:30:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.685 00:30:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.685 00:30:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:16.685 00:30:10 -- bdev/nbd_common.sh@41 -- # break 00:16:16.685 00:30:10 -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.685 00:30:10 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:16.685 00:30:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:16.685 00:30:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:16.942 00:30:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:16.942 00:30:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:16.942 00:30:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:16.942 00:30:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:16.942 00:30:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:16.942 00:30:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:16.942 00:30:10 -- bdev/nbd_common.sh@65 -- # true 00:16:16.942 00:30:10 -- bdev/nbd_common.sh@65 -- # count=0 00:16:16.942 00:30:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:16.942 00:30:10 -- bdev/bdev_raid.sh@106 -- # count=0 00:16:16.942 00:30:10 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:16:16.942 00:30:10 
-- bdev/bdev_raid.sh@111 -- # killprocess 119868 00:16:16.942 00:30:10 -- common/autotest_common.sh@936 -- # '[' -z 119868 ']' 00:16:16.942 00:30:10 -- common/autotest_common.sh@940 -- # kill -0 119868 00:16:16.942 00:30:10 -- common/autotest_common.sh@941 -- # uname 00:16:16.942 00:30:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.942 00:30:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119868 00:16:16.942 00:30:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:16.942 00:30:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:16.942 00:30:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119868' 00:16:16.942 killing process with pid 119868 00:16:16.942 00:30:10 -- common/autotest_common.sh@955 -- # kill 119868 00:16:16.942 [2024-04-24 00:30:10.723565] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.942 [2024-04-24 00:30:10.723713] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.942 00:30:10 -- common/autotest_common.sh@960 -- # wait 119868 00:16:16.942 [2024-04-24 00:30:10.723780] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.942 [2024-04-24 00:30:10.723791] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid, state offline 00:16:17.223 [2024-04-24 00:30:10.943796] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.123 ************************************ 00:16:19.123 END TEST raid_function_test_raid0 00:16:19.123 ************************************ 00:16:19.123 00:30:12 -- bdev/bdev_raid.sh@113 -- # return 0 00:16:19.123 00:16:19.123 real 0m5.131s 00:16:19.123 user 0m6.351s 00:16:19.123 sys 0m1.185s 00:16:19.123 00:30:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:19.123 00:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:19.123 00:30:12 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:16:19.123 00:30:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:19.123 00:30:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.123 00:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:19.123 ************************************ 00:16:19.123 START TEST raid_function_test_concat 00:16:19.123 ************************************ 00:16:19.123 00:30:12 -- common/autotest_common.sh@1111 -- # raid_function_test concat 00:16:19.123 00:30:12 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:16:19.123 00:30:12 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:16:19.123 00:30:12 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:16:19.123 00:30:12 -- bdev/bdev_raid.sh@86 -- # raid_pid=120041 00:16:19.123 00:30:12 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 120041' 00:16:19.123 Process raid pid: 120041 00:16:19.123 00:30:12 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:19.123 00:30:12 -- bdev/bdev_raid.sh@88 -- # waitforlisten 120041 /var/tmp/spdk-raid.sock 00:16:19.123 00:30:12 -- common/autotest_common.sh@817 -- # '[' -z 120041 ']' 00:16:19.123 00:30:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:19.123 00:30:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:19.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:19.123 00:30:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:19.123 00:30:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:19.123 00:30:12 -- common/autotest_common.sh@10 -- # set +x 00:16:19.123 [2024-04-24 00:30:12.562094] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:16:19.123 [2024-04-24 00:30:12.562254] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.123 [2024-04-24 00:30:12.740204] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.382 [2024-04-24 00:30:12.971576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.640 [2024-04-24 00:30:13.211755] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.898 00:30:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:19.898 00:30:13 -- common/autotest_common.sh@850 -- # return 0 00:16:19.898 00:30:13 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:16:19.898 00:30:13 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:16:19.898 00:30:13 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:19.898 00:30:13 -- bdev/bdev_raid.sh@70 -- # cat 00:16:19.898 00:30:13 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:16:20.157 [2024-04-24 00:30:13.862498] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:20.157 [2024-04-24 00:30:13.864796] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:20.157 [2024-04-24 00:30:13.864886] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:16:20.157 [2024-04-24 00:30:13.864897] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:20.157 [2024-04-24 00:30:13.865054] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:20.157 [2024-04-24 00:30:13.865419] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:16:20.157 [2024-04-24 00:30:13.865441] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000010e00 00:16:20.157 [2024-04-24 00:30:13.865627] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.157 Base_1 00:16:20.157 Base_2 00:16:20.157 00:30:13 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:16:20.157 00:30:13 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:16:20.157 00:30:13 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:20.416 00:30:14 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:16:20.416 00:30:14 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:16:20.416 00:30:14 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:16:20.416 00:30:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:20.416 00:30:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:16:20.416 00:30:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:20.416 00:30:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:20.416 
00:30:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:20.416 00:30:14 -- bdev/nbd_common.sh@12 -- # local i 00:16:20.416 00:30:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:20.416 00:30:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.416 00:30:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:16:20.674 [2024-04-24 00:30:14.442621] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:20.955 /dev/nbd0 00:16:20.955 00:30:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:20.955 00:30:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:20.955 00:30:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:20.955 00:30:14 -- common/autotest_common.sh@855 -- # local i 00:16:20.955 00:30:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:20.955 00:30:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:20.955 00:30:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:20.955 00:30:14 -- common/autotest_common.sh@859 -- # break 00:16:20.955 00:30:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:20.955 00:30:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:20.955 00:30:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.955 1+0 records in 00:16:20.955 1+0 records out 00:16:20.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327238 s, 12.5 MB/s 00:16:20.955 00:30:14 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.955 00:30:14 -- common/autotest_common.sh@872 -- # size=4096 00:16:20.955 00:30:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.955 00:30:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:20.955 00:30:14 -- common/autotest_common.sh@875 -- # return 0 00:16:20.955 00:30:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.955 00:30:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.955 00:30:14 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:20.955 00:30:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:20.955 00:30:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:21.212 00:30:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:21.212 { 00:16:21.212 "nbd_device": "/dev/nbd0", 00:16:21.212 "bdev_name": "raid" 00:16:21.212 } 00:16:21.212 ]' 00:16:21.212 00:30:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:21.212 { 00:16:21.212 "nbd_device": "/dev/nbd0", 00:16:21.212 "bdev_name": "raid" 00:16:21.212 } 00:16:21.212 ]' 00:16:21.212 00:30:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:21.212 00:30:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:21.212 00:30:14 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:21.212 00:30:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:21.212 00:30:14 -- bdev/nbd_common.sh@65 -- # count=1 00:16:21.212 00:30:14 -- bdev/nbd_common.sh@66 -- # echo 1 00:16:21.212 00:30:14 -- bdev/bdev_raid.sh@98 -- # count=1 00:16:21.212 00:30:14 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:16:21.212 00:30:14 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 
00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@20 -- # local blksize 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:16:21.213 4096+0 records in 00:16:21.213 4096+0 records out 00:16:21.213 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0341496 s, 61.4 MB/s 00:16:21.213 00:30:14 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:16:21.470 4096+0 records in 00:16:21.470 4096+0 records out 00:16:21.470 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.238728 s, 8.8 MB/s 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:16:21.470 128+0 records in 00:16:21.470 128+0 records out 00:16:21.470 65536 bytes (66 kB, 64 KiB) copied, 0.000661717 s, 99.0 MB/s 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:16:21.470 2035+0 records in 00:16:21.470 2035+0 records out 00:16:21.470 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00828334 s, 126 MB/s 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:16:21.470 00:30:15 -- 
bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:16:21.470 456+0 records in 00:16:21.470 456+0 records out 00:16:21.470 233472 bytes (233 kB, 228 KiB) copied, 0.00178288 s, 131 MB/s 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@53 -- # return 0 00:16:21.470 00:30:15 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:16:21.470 00:30:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:21.470 00:30:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:21.470 00:30:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:21.470 00:30:15 -- bdev/nbd_common.sh@51 -- # local i 00:16:21.470 00:30:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:21.470 00:30:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:16:21.728 [2024-04-24 00:30:15.463576] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.728 00:30:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:21.728 00:30:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:21.728 00:30:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:21.728 00:30:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.728 00:30:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.728 00:30:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:21.728 00:30:15 -- bdev/nbd_common.sh@41 -- # break 00:16:21.728 00:30:15 -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.728 00:30:15 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:16:21.728 00:30:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:16:21.728 00:30:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:16:21.985 00:30:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:21.985 00:30:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:21.985 00:30:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:21.985 00:30:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:21.985 00:30:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:21.985 00:30:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:21.985 00:30:15 -- bdev/nbd_common.sh@65 -- # true 00:16:21.985 00:30:15 -- bdev/nbd_common.sh@65 -- # count=0 00:16:21.985 00:30:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:21.985 00:30:15 -- bdev/bdev_raid.sh@106 -- # count=0 00:16:21.985 00:30:15 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:16:21.985 00:30:15 -- bdev/bdev_raid.sh@111 -- # killprocess 120041 00:16:21.985 00:30:15 -- common/autotest_common.sh@936 -- # '[' -z 120041 ']' 00:16:21.985 00:30:15 -- common/autotest_common.sh@940 -- # kill -0 120041 00:16:21.985 00:30:15 -- common/autotest_common.sh@941 -- # uname 00:16:21.985 00:30:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:21.985 00:30:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120041 00:16:21.985 00:30:15 
-- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:21.985 killing process with pid 120041 00:16:21.985 00:30:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:21.985 00:30:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120041' 00:16:21.985 00:30:15 -- common/autotest_common.sh@955 -- # kill 120041 00:16:21.985 00:30:15 -- common/autotest_common.sh@960 -- # wait 120041 00:16:21.985 [2024-04-24 00:30:15.764013] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.985 [2024-04-24 00:30:15.764118] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.985 [2024-04-24 00:30:15.764168] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.985 [2024-04-24 00:30:15.764177] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid, state offline 00:16:22.243 [2024-04-24 00:30:15.984825] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.145 00:30:17 -- bdev/bdev_raid.sh@113 -- # return 0 00:16:24.145 00:16:24.145 real 0m4.932s 00:16:24.145 user 0m6.222s 00:16:24.145 sys 0m0.968s 00:16:24.145 00:30:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:24.145 00:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:24.145 ************************************ 00:16:24.145 END TEST raid_function_test_concat 00:16:24.145 ************************************ 00:16:24.145 00:30:17 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:16:24.145 00:30:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:24.145 00:30:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.145 00:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:24.145 ************************************ 00:16:24.145 START TEST raid0_resize_test 00:16:24.145 ************************************ 00:16:24.145 00:30:17 -- common/autotest_common.sh@1111 -- # raid0_resize_test 00:16:24.145 00:30:17 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:16:24.145 00:30:17 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:16:24.145 00:30:17 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:16:24.145 00:30:17 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:16:24.145 00:30:17 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:16:24.145 00:30:17 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:16:24.145 00:30:17 -- bdev/bdev_raid.sh@301 -- # raid_pid=120208 00:16:24.145 00:30:17 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:24.145 00:30:17 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 120208' 00:16:24.145 Process raid pid: 120208 00:16:24.145 00:30:17 -- bdev/bdev_raid.sh@303 -- # waitforlisten 120208 /var/tmp/spdk-raid.sock 00:16:24.145 00:30:17 -- common/autotest_common.sh@817 -- # '[' -z 120208 ']' 00:16:24.145 00:30:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:24.145 00:30:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:24.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:24.145 00:30:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:16:24.145 00:30:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:24.145 00:30:17 -- common/autotest_common.sh@10 -- # set +x 00:16:24.145 [2024-04-24 00:30:17.581040] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:16:24.145 [2024-04-24 00:30:17.581213] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.145 [2024-04-24 00:30:17.748951] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.404 [2024-04-24 00:30:17.965965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.404 [2024-04-24 00:30:18.185219] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.969 00:30:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:24.969 00:30:18 -- common/autotest_common.sh@850 -- # return 0 00:16:24.969 00:30:18 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:16:25.236 Base_1 00:16:25.236 00:30:18 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:16:25.522 Base_2 00:16:25.522 00:30:19 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:16:25.522 [2024-04-24 00:30:19.248526] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:16:25.522 [2024-04-24 00:30:19.250970] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:16:25.522 [2024-04-24 00:30:19.251189] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:16:25.522 [2024-04-24 00:30:19.251281] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:25.522 [2024-04-24 00:30:19.251474] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005450 00:16:25.522 [2024-04-24 00:30:19.251843] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:16:25.522 [2024-04-24 00:30:19.251955] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000010e00 00:16:25.522 [2024-04-24 00:30:19.252244] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.522 00:30:19 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:16:25.779 [2024-04-24 00:30:19.520736] bdev_raid.c:2222:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:25.779 [2024-04-24 00:30:19.521039] bdev_raid.c:2235:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:16:25.779 true 00:16:25.779 00:30:19 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:16:25.779 00:30:19 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:16:26.037 [2024-04-24 00:30:19.728818] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.037 00:30:19 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:16:26.037 00:30:19 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:16:26.037 00:30:19 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:16:26.037 
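Condensed, the resize flow this test exercises is the following RPC sequence (a hand-run sketch against the same socket; every command mirrors one in the trace, and the RPC/SOCK variables are only shorthand introduced here):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock
# Two 32 MiB null bdevs with 512-byte blocks act as the base devices
$RPC -s $SOCK bdev_null_create Base_1 32 512
$RPC -s $SOCK bdev_null_create Base_2 32 512
# Assemble a raid0 bdev named Raid with a 64 KiB strip size
$RPC -s $SOCK bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
# Grow only Base_1 to 64 MiB: the raid0 size cannot grow until every base bdev has grown
$RPC -s $SOCK bdev_null_resize Base_1 64
$RPC -s $SOCK bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # still 131072 (64 MiB of 512 B blocks)
# Grow Base_2 as well: the raid block count doubles to 262144 (128 MiB)
$RPC -s $SOCK bdev_null_resize Base_2 64
$RPC -s $SOCK bdev_get_bdevs -b Raid | jq '.[].num_blocks'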
00:30:19 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:16:26.295 [2024-04-24 00:30:19.952753] bdev_raid.c:2222:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:16:26.295 [2024-04-24 00:30:19.952987] bdev_raid.c:2235:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:16:26.295 [2024-04-24 00:30:19.953228] bdev_raid.c:2249:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:16:26.295 true 00:16:26.295 00:30:19 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:16:26.295 00:30:19 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:16:26.553 [2024-04-24 00:30:20.220893] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.553 00:30:20 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:16:26.553 00:30:20 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:16:26.553 00:30:20 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:16:26.553 00:30:20 -- bdev/bdev_raid.sh@332 -- # killprocess 120208 00:16:26.553 00:30:20 -- common/autotest_common.sh@936 -- # '[' -z 120208 ']' 00:16:26.553 00:30:20 -- common/autotest_common.sh@940 -- # kill -0 120208 00:16:26.553 00:30:20 -- common/autotest_common.sh@941 -- # uname 00:16:26.553 00:30:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:26.553 00:30:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120208 00:16:26.553 00:30:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:26.553 00:30:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:26.553 00:30:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120208' 00:16:26.553 killing process with pid 120208 00:16:26.553 00:30:20 -- common/autotest_common.sh@955 -- # kill 120208 00:16:26.553 [2024-04-24 00:30:20.269967] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:26.553 00:30:20 -- common/autotest_common.sh@960 -- # wait 120208 00:16:26.553 [2024-04-24 00:30:20.270148] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.553 [2024-04-24 00:30:20.270228] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.553 [2024-04-24 00:30:20.270278] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Raid, state offline 00:16:26.553 [2024-04-24 00:30:20.270983] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.926 00:30:21 -- bdev/bdev_raid.sh@334 -- # return 0 00:16:27.926 00:16:27.926 real 0m4.187s 00:16:27.926 user 0m5.810s 00:16:27.926 sys 0m0.559s 00:16:27.926 00:30:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:27.926 00:30:21 -- common/autotest_common.sh@10 -- # set +x 00:16:27.926 ************************************ 00:16:27.926 END TEST raid0_resize_test 00:16:27.926 ************************************ 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:16:28.184 00:30:21 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:28.184 00:30:21 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:16:28.184 00:30:21 -- common/autotest_common.sh@10 -- # set +x 00:16:28.184 ************************************ 00:16:28.184 START TEST raid_state_function_test 00:16:28.184 ************************************ 00:16:28.184 00:30:21 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 2 false 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@226 -- # raid_pid=120308 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120308' 00:16:28.184 Process raid pid: 120308 00:16:28.184 00:30:21 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120308 /var/tmp/spdk-raid.sock 00:16:28.184 00:30:21 -- common/autotest_common.sh@817 -- # '[' -z 120308 ']' 00:16:28.184 00:30:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:28.184 00:30:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:28.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:28.184 00:30:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:28.184 00:30:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:28.184 00:30:21 -- common/autotest_common.sh@10 -- # set +x 00:16:28.184 [2024-04-24 00:30:21.873675] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:16:28.184 [2024-04-24 00:30:21.873847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.443 [2024-04-24 00:30:22.044081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.701 [2024-04-24 00:30:22.316259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.967 [2024-04-24 00:30:22.535006] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.967 00:30:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:28.967 00:30:22 -- common/autotest_common.sh@850 -- # return 0 00:16:28.967 00:30:22 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:29.224 [2024-04-24 00:30:22.979248] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.225 [2024-04-24 00:30:22.979350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.225 [2024-04-24 00:30:22.979363] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.225 [2024-04-24 00:30:22.979384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.225 00:30:22 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:29.225 00:30:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:29.225 00:30:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:29.225 00:30:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:29.225 00:30:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:29.225 00:30:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:29.225 00:30:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:29.225 00:30:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:29.225 00:30:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:29.225 00:30:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:29.225 00:30:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.225 00:30:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.798 00:30:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:29.798 "name": "Existed_Raid", 00:16:29.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.798 "strip_size_kb": 64, 00:16:29.798 "state": "configuring", 00:16:29.798 "raid_level": "raid0", 00:16:29.798 "superblock": false, 00:16:29.798 "num_base_bdevs": 2, 00:16:29.799 "num_base_bdevs_discovered": 0, 00:16:29.799 "num_base_bdevs_operational": 2, 00:16:29.799 "base_bdevs_list": [ 00:16:29.799 { 00:16:29.799 "name": "BaseBdev1", 00:16:29.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.799 "is_configured": false, 00:16:29.799 "data_offset": 0, 00:16:29.799 "data_size": 0 00:16:29.799 }, 00:16:29.799 { 00:16:29.799 "name": "BaseBdev2", 00:16:29.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.799 "is_configured": false, 00:16:29.799 "data_offset": 0, 00:16:29.799 "data_size": 0 00:16:29.799 } 00:16:29.799 ] 00:16:29.799 }' 00:16:29.799 00:30:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:29.799 00:30:23 -- 
common/autotest_common.sh@10 -- # set +x 00:16:30.364 00:30:24 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:30.623 [2024-04-24 00:30:24.308519] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.623 [2024-04-24 00:30:24.308587] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:16:30.623 00:30:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:30.882 [2024-04-24 00:30:24.536584] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.882 [2024-04-24 00:30:24.536699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.882 [2024-04-24 00:30:24.536712] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.882 [2024-04-24 00:30:24.536747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.882 00:30:24 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:31.140 [2024-04-24 00:30:24.830569] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.140 BaseBdev1 00:16:31.140 00:30:24 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:31.140 00:30:24 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:31.141 00:30:24 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:31.141 00:30:24 -- common/autotest_common.sh@887 -- # local i 00:16:31.141 00:30:24 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:31.141 00:30:24 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:31.141 00:30:24 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:31.398 00:30:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:31.665 [ 00:16:31.665 { 00:16:31.665 "name": "BaseBdev1", 00:16:31.665 "aliases": [ 00:16:31.665 "e2734947-3d82-4d16-96a8-e4c0ec084145" 00:16:31.665 ], 00:16:31.665 "product_name": "Malloc disk", 00:16:31.665 "block_size": 512, 00:16:31.665 "num_blocks": 65536, 00:16:31.665 "uuid": "e2734947-3d82-4d16-96a8-e4c0ec084145", 00:16:31.665 "assigned_rate_limits": { 00:16:31.665 "rw_ios_per_sec": 0, 00:16:31.665 "rw_mbytes_per_sec": 0, 00:16:31.665 "r_mbytes_per_sec": 0, 00:16:31.665 "w_mbytes_per_sec": 0 00:16:31.665 }, 00:16:31.665 "claimed": true, 00:16:31.665 "claim_type": "exclusive_write", 00:16:31.665 "zoned": false, 00:16:31.665 "supported_io_types": { 00:16:31.665 "read": true, 00:16:31.665 "write": true, 00:16:31.665 "unmap": true, 00:16:31.665 "write_zeroes": true, 00:16:31.665 "flush": true, 00:16:31.665 "reset": true, 00:16:31.665 "compare": false, 00:16:31.665 "compare_and_write": false, 00:16:31.665 "abort": true, 00:16:31.665 "nvme_admin": false, 00:16:31.665 "nvme_io": false 00:16:31.665 }, 00:16:31.665 "memory_domains": [ 00:16:31.665 { 00:16:31.665 "dma_device_id": "system", 00:16:31.665 "dma_device_type": 1 00:16:31.665 }, 00:16:31.665 { 00:16:31.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.665 "dma_device_type": 2 00:16:31.665 
} 00:16:31.665 ], 00:16:31.665 "driver_specific": {} 00:16:31.665 } 00:16:31.665 ] 00:16:31.924 00:30:25 -- common/autotest_common.sh@893 -- # return 0 00:16:31.924 00:30:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:31.924 00:30:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.924 00:30:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.924 00:30:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:31.924 00:30:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:31.924 00:30:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:31.924 00:30:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.924 00:30:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.924 00:30:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.924 00:30:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.924 00:30:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.924 00:30:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.183 00:30:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.183 "name": "Existed_Raid", 00:16:32.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.183 "strip_size_kb": 64, 00:16:32.183 "state": "configuring", 00:16:32.183 "raid_level": "raid0", 00:16:32.183 "superblock": false, 00:16:32.183 "num_base_bdevs": 2, 00:16:32.183 "num_base_bdevs_discovered": 1, 00:16:32.183 "num_base_bdevs_operational": 2, 00:16:32.183 "base_bdevs_list": [ 00:16:32.183 { 00:16:32.183 "name": "BaseBdev1", 00:16:32.183 "uuid": "e2734947-3d82-4d16-96a8-e4c0ec084145", 00:16:32.183 "is_configured": true, 00:16:32.183 "data_offset": 0, 00:16:32.183 "data_size": 65536 00:16:32.183 }, 00:16:32.183 { 00:16:32.183 "name": "BaseBdev2", 00:16:32.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.183 "is_configured": false, 00:16:32.183 "data_offset": 0, 00:16:32.183 "data_size": 0 00:16:32.183 } 00:16:32.183 ] 00:16:32.183 }' 00:16:32.183 00:30:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.183 00:30:25 -- common/autotest_common.sh@10 -- # set +x 00:16:32.748 00:30:26 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:33.011 [2024-04-24 00:30:26.763174] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.012 [2024-04-24 00:30:26.763267] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:16:33.012 00:30:26 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:33.012 00:30:26 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:33.288 [2024-04-24 00:30:27.055196] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.288 [2024-04-24 00:30:27.057565] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.288 [2024-04-24 00:30:27.057635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.288 00:30:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:33.288 00:30:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:33.288 00:30:27 -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:33.288 00:30:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:33.288 00:30:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.288 00:30:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:33.288 00:30:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:33.288 00:30:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:33.288 00:30:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.547 00:30:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.547 00:30:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.547 00:30:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.547 00:30:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.547 00:30:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.816 00:30:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.816 "name": "Existed_Raid", 00:16:33.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.816 "strip_size_kb": 64, 00:16:33.816 "state": "configuring", 00:16:33.816 "raid_level": "raid0", 00:16:33.816 "superblock": false, 00:16:33.816 "num_base_bdevs": 2, 00:16:33.816 "num_base_bdevs_discovered": 1, 00:16:33.816 "num_base_bdevs_operational": 2, 00:16:33.816 "base_bdevs_list": [ 00:16:33.816 { 00:16:33.816 "name": "BaseBdev1", 00:16:33.816 "uuid": "e2734947-3d82-4d16-96a8-e4c0ec084145", 00:16:33.816 "is_configured": true, 00:16:33.816 "data_offset": 0, 00:16:33.816 "data_size": 65536 00:16:33.816 }, 00:16:33.816 { 00:16:33.816 "name": "BaseBdev2", 00:16:33.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.816 "is_configured": false, 00:16:33.816 "data_offset": 0, 00:16:33.816 "data_size": 0 00:16:33.816 } 00:16:33.816 ] 00:16:33.816 }' 00:16:33.816 00:30:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.816 00:30:27 -- common/autotest_common.sh@10 -- # set +x 00:16:34.382 00:30:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:34.641 [2024-04-24 00:30:28.331453] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.641 [2024-04-24 00:30:28.331522] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:16:34.641 [2024-04-24 00:30:28.331531] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:34.641 [2024-04-24 00:30:28.331649] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:34.641 [2024-04-24 00:30:28.331987] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:16:34.641 [2024-04-24 00:30:28.332007] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:16:34.641 [2024-04-24 00:30:28.332336] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.641 BaseBdev2 00:16:34.641 00:30:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:34.641 00:30:28 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:34.641 00:30:28 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:34.641 00:30:28 -- common/autotest_common.sh@887 -- # local i 00:16:34.641 00:30:28 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 
00:16:34.641 00:30:28 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:34.641 00:30:28 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:34.900 00:30:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.159 [ 00:16:35.159 { 00:16:35.159 "name": "BaseBdev2", 00:16:35.159 "aliases": [ 00:16:35.159 "72d2579c-9b9c-4877-ad49-c6421923c0d2" 00:16:35.159 ], 00:16:35.159 "product_name": "Malloc disk", 00:16:35.159 "block_size": 512, 00:16:35.159 "num_blocks": 65536, 00:16:35.159 "uuid": "72d2579c-9b9c-4877-ad49-c6421923c0d2", 00:16:35.159 "assigned_rate_limits": { 00:16:35.159 "rw_ios_per_sec": 0, 00:16:35.159 "rw_mbytes_per_sec": 0, 00:16:35.159 "r_mbytes_per_sec": 0, 00:16:35.159 "w_mbytes_per_sec": 0 00:16:35.159 }, 00:16:35.159 "claimed": true, 00:16:35.159 "claim_type": "exclusive_write", 00:16:35.159 "zoned": false, 00:16:35.159 "supported_io_types": { 00:16:35.159 "read": true, 00:16:35.159 "write": true, 00:16:35.159 "unmap": true, 00:16:35.159 "write_zeroes": true, 00:16:35.159 "flush": true, 00:16:35.159 "reset": true, 00:16:35.159 "compare": false, 00:16:35.159 "compare_and_write": false, 00:16:35.159 "abort": true, 00:16:35.159 "nvme_admin": false, 00:16:35.159 "nvme_io": false 00:16:35.159 }, 00:16:35.159 "memory_domains": [ 00:16:35.159 { 00:16:35.159 "dma_device_id": "system", 00:16:35.159 "dma_device_type": 1 00:16:35.159 }, 00:16:35.159 { 00:16:35.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.159 "dma_device_type": 2 00:16:35.159 } 00:16:35.159 ], 00:16:35.159 "driver_specific": {} 00:16:35.159 } 00:16:35.159 ] 00:16:35.159 00:30:28 -- common/autotest_common.sh@893 -- # return 0 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.159 00:30:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.726 00:30:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.726 "name": "Existed_Raid", 00:16:35.726 "uuid": "e1d7aad1-162f-40fc-bdc1-7f36ad2967b8", 00:16:35.726 "strip_size_kb": 64, 00:16:35.726 "state": "online", 00:16:35.726 "raid_level": "raid0", 00:16:35.726 "superblock": false, 00:16:35.726 "num_base_bdevs": 2, 00:16:35.726 "num_base_bdevs_discovered": 2, 00:16:35.726 "num_base_bdevs_operational": 2, 00:16:35.726 "base_bdevs_list": [ 00:16:35.726 { 00:16:35.726 "name": "BaseBdev1", 00:16:35.726 "uuid": 
"e2734947-3d82-4d16-96a8-e4c0ec084145", 00:16:35.726 "is_configured": true, 00:16:35.726 "data_offset": 0, 00:16:35.726 "data_size": 65536 00:16:35.726 }, 00:16:35.726 { 00:16:35.726 "name": "BaseBdev2", 00:16:35.726 "uuid": "72d2579c-9b9c-4877-ad49-c6421923c0d2", 00:16:35.726 "is_configured": true, 00:16:35.726 "data_offset": 0, 00:16:35.726 "data_size": 65536 00:16:35.726 } 00:16:35.726 ] 00:16:35.726 }' 00:16:35.726 00:30:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.726 00:30:29 -- common/autotest_common.sh@10 -- # set +x 00:16:36.293 00:30:29 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:36.626 [2024-04-24 00:30:30.160032] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:36.626 [2024-04-24 00:30:30.160072] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.626 [2024-04-24 00:30:30.160122] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.626 00:30:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.627 00:30:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.885 00:30:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.885 "name": "Existed_Raid", 00:16:36.885 "uuid": "e1d7aad1-162f-40fc-bdc1-7f36ad2967b8", 00:16:36.885 "strip_size_kb": 64, 00:16:36.885 "state": "offline", 00:16:36.885 "raid_level": "raid0", 00:16:36.885 "superblock": false, 00:16:36.885 "num_base_bdevs": 2, 00:16:36.885 "num_base_bdevs_discovered": 1, 00:16:36.885 "num_base_bdevs_operational": 1, 00:16:36.885 "base_bdevs_list": [ 00:16:36.885 { 00:16:36.885 "name": null, 00:16:36.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.885 "is_configured": false, 00:16:36.885 "data_offset": 0, 00:16:36.885 "data_size": 65536 00:16:36.885 }, 00:16:36.885 { 00:16:36.885 "name": "BaseBdev2", 00:16:36.885 "uuid": "72d2579c-9b9c-4877-ad49-c6421923c0d2", 00:16:36.885 "is_configured": true, 00:16:36.885 "data_offset": 0, 00:16:36.885 "data_size": 65536 00:16:36.885 } 00:16:36.885 ] 00:16:36.885 }' 00:16:36.885 00:30:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.885 00:30:30 -- common/autotest_common.sh@10 -- # set +x 00:16:37.459 00:30:31 -- 
bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:37.459 00:30:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:37.459 00:30:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.459 00:30:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:38.025 00:30:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:38.025 00:30:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:38.025 00:30:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:38.025 [2024-04-24 00:30:31.719449] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:38.025 [2024-04-24 00:30:31.719543] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:16:38.283 00:30:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:38.283 00:30:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:38.283 00:30:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:38.284 00:30:31 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.542 00:30:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:38.542 00:30:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:38.542 00:30:32 -- bdev/bdev_raid.sh@287 -- # killprocess 120308 00:16:38.542 00:30:32 -- common/autotest_common.sh@936 -- # '[' -z 120308 ']' 00:16:38.542 00:30:32 -- common/autotest_common.sh@940 -- # kill -0 120308 00:16:38.542 00:30:32 -- common/autotest_common.sh@941 -- # uname 00:16:38.542 00:30:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.542 00:30:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120308 00:16:38.542 killing process with pid 120308 00:16:38.542 00:30:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:38.542 00:30:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:38.542 00:30:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120308' 00:16:38.542 00:30:32 -- common/autotest_common.sh@955 -- # kill 120308 00:16:38.542 00:30:32 -- common/autotest_common.sh@960 -- # wait 120308 00:16:38.542 [2024-04-24 00:30:32.191112] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:38.542 [2024-04-24 00:30:32.191288] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:39.938 ************************************ 00:16:39.938 END TEST raid_state_function_test 00:16:39.938 ************************************ 00:16:39.938 00:30:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:39.938 00:16:39.938 real 0m11.899s 00:16:39.938 user 0m20.057s 00:16:39.938 sys 0m1.700s 00:16:39.938 00:30:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:39.938 00:30:33 -- common/autotest_common.sh@10 -- # set +x 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:16:40.197 00:30:33 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:40.197 00:30:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:40.197 00:30:33 -- common/autotest_common.sh@10 -- # set +x 00:16:40.197 ************************************ 00:16:40.197 START TEST raid_state_function_test_sb 00:16:40.197 ************************************ 00:16:40.197 00:30:33 -- common/autotest_common.sh@1111 -- # 
raid_state_function_test raid0 2 true 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=120651 00:16:40.197 Process raid pid: 120651 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120651' 00:16:40.197 00:30:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120651 /var/tmp/spdk-raid.sock 00:16:40.197 00:30:33 -- common/autotest_common.sh@817 -- # '[' -z 120651 ']' 00:16:40.197 00:30:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:40.197 00:30:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:40.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:40.197 00:30:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:40.197 00:30:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:40.197 00:30:33 -- common/autotest_common.sh@10 -- # set +x 00:16:40.197 [2024-04-24 00:30:33.897612] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:16:40.197 [2024-04-24 00:30:33.897806] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.456 [2024-04-24 00:30:34.078323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.714 [2024-04-24 00:30:34.345612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.973 [2024-04-24 00:30:34.583013] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.232 00:30:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:41.232 00:30:34 -- common/autotest_common.sh@850 -- # return 0 00:16:41.232 00:30:34 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:41.491 [2024-04-24 00:30:35.214510] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.491 [2024-04-24 00:30:35.214602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.491 [2024-04-24 00:30:35.214614] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.491 [2024-04-24 00:30:35.214632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.491 00:30:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:41.491 00:30:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.491 00:30:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:41.491 00:30:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:41.491 00:30:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:41.491 00:30:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:41.491 00:30:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.491 00:30:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.491 00:30:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.491 00:30:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.491 00:30:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.491 00:30:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.749 00:30:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.749 "name": "Existed_Raid", 00:16:41.749 "uuid": "02d85cff-c2d4-482d-911b-f9b9d671e072", 00:16:41.749 "strip_size_kb": 64, 00:16:41.749 "state": "configuring", 00:16:41.749 "raid_level": "raid0", 00:16:41.749 "superblock": true, 00:16:41.749 "num_base_bdevs": 2, 00:16:41.749 "num_base_bdevs_discovered": 0, 00:16:41.749 "num_base_bdevs_operational": 2, 00:16:41.749 "base_bdevs_list": [ 00:16:41.749 { 00:16:41.749 "name": "BaseBdev1", 00:16:41.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.749 "is_configured": false, 00:16:41.749 "data_offset": 0, 00:16:41.749 "data_size": 0 00:16:41.749 }, 00:16:41.749 { 00:16:41.749 "name": "BaseBdev2", 00:16:41.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.749 "is_configured": false, 00:16:41.749 "data_offset": 0, 00:16:41.749 "data_size": 0 00:16:41.749 } 00:16:41.749 ] 00:16:41.749 }' 00:16:41.749 00:30:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.749 00:30:35 -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.683 00:30:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:42.683 [2024-04-24 00:30:36.446629] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.683 [2024-04-24 00:30:36.446686] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:16:42.683 00:30:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:43.273 [2024-04-24 00:30:36.738742] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:43.273 [2024-04-24 00:30:36.738854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:43.273 [2024-04-24 00:30:36.738868] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:43.273 [2024-04-24 00:30:36.738903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:43.273 00:30:36 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:43.273 [2024-04-24 00:30:37.008209] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.273 BaseBdev1 00:16:43.273 00:30:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:43.273 00:30:37 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:43.273 00:30:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:43.273 00:30:37 -- common/autotest_common.sh@887 -- # local i 00:16:43.273 00:30:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:43.273 00:30:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:43.273 00:30:37 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:43.841 00:30:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:43.841 [ 00:16:43.841 { 00:16:43.841 "name": "BaseBdev1", 00:16:43.841 "aliases": [ 00:16:43.841 "eaa6eb5a-aff6-4e25-924f-47b068a1e046" 00:16:43.841 ], 00:16:43.841 "product_name": "Malloc disk", 00:16:43.841 "block_size": 512, 00:16:43.841 "num_blocks": 65536, 00:16:43.841 "uuid": "eaa6eb5a-aff6-4e25-924f-47b068a1e046", 00:16:43.841 "assigned_rate_limits": { 00:16:43.841 "rw_ios_per_sec": 0, 00:16:43.841 "rw_mbytes_per_sec": 0, 00:16:43.841 "r_mbytes_per_sec": 0, 00:16:43.841 "w_mbytes_per_sec": 0 00:16:43.841 }, 00:16:43.841 "claimed": true, 00:16:43.841 "claim_type": "exclusive_write", 00:16:43.841 "zoned": false, 00:16:43.841 "supported_io_types": { 00:16:43.841 "read": true, 00:16:43.841 "write": true, 00:16:43.841 "unmap": true, 00:16:43.841 "write_zeroes": true, 00:16:43.841 "flush": true, 00:16:43.841 "reset": true, 00:16:43.841 "compare": false, 00:16:43.841 "compare_and_write": false, 00:16:43.841 "abort": true, 00:16:43.841 "nvme_admin": false, 00:16:43.841 "nvme_io": false 00:16:43.841 }, 00:16:43.841 "memory_domains": [ 00:16:43.841 { 00:16:43.841 "dma_device_id": "system", 00:16:43.841 "dma_device_type": 1 00:16:43.841 }, 00:16:43.841 { 00:16:43.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.841 "dma_device_type": 2 
00:16:43.841 } 00:16:43.841 ], 00:16:43.841 "driver_specific": {} 00:16:43.841 } 00:16:43.841 ] 00:16:43.841 00:30:37 -- common/autotest_common.sh@893 -- # return 0 00:16:43.841 00:30:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:43.841 00:30:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:43.841 00:30:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:43.841 00:30:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:43.841 00:30:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:43.841 00:30:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:43.841 00:30:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:43.841 00:30:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:43.841 00:30:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:43.841 00:30:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:43.841 00:30:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.841 00:30:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.407 00:30:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.407 "name": "Existed_Raid", 00:16:44.407 "uuid": "b2c7b38d-ce21-48bb-90ba-6b12b31eb858", 00:16:44.407 "strip_size_kb": 64, 00:16:44.407 "state": "configuring", 00:16:44.407 "raid_level": "raid0", 00:16:44.407 "superblock": true, 00:16:44.407 "num_base_bdevs": 2, 00:16:44.407 "num_base_bdevs_discovered": 1, 00:16:44.407 "num_base_bdevs_operational": 2, 00:16:44.407 "base_bdevs_list": [ 00:16:44.407 { 00:16:44.407 "name": "BaseBdev1", 00:16:44.407 "uuid": "eaa6eb5a-aff6-4e25-924f-47b068a1e046", 00:16:44.407 "is_configured": true, 00:16:44.407 "data_offset": 2048, 00:16:44.407 "data_size": 63488 00:16:44.407 }, 00:16:44.407 { 00:16:44.407 "name": "BaseBdev2", 00:16:44.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.407 "is_configured": false, 00:16:44.407 "data_offset": 0, 00:16:44.407 "data_size": 0 00:16:44.407 } 00:16:44.407 ] 00:16:44.407 }' 00:16:44.407 00:30:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.407 00:30:37 -- common/autotest_common.sh@10 -- # set +x 00:16:44.971 00:30:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:45.228 [2024-04-24 00:30:38.904772] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:45.228 [2024-04-24 00:30:38.904837] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:16:45.228 00:30:38 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:45.228 00:30:38 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:45.795 00:30:39 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:46.055 BaseBdev1 00:16:46.055 00:30:39 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:46.055 00:30:39 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:46.055 00:30:39 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:46.055 00:30:39 -- common/autotest_common.sh@887 -- # local i 00:16:46.055 00:30:39 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:46.055 00:30:39 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:46.055 00:30:39 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:46.313 00:30:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:46.879 [ 00:16:46.879 { 00:16:46.879 "name": "BaseBdev1", 00:16:46.879 "aliases": [ 00:16:46.879 "62bb2085-d7d9-45b3-9aa5-dd4be4d0a8d4" 00:16:46.879 ], 00:16:46.879 "product_name": "Malloc disk", 00:16:46.879 "block_size": 512, 00:16:46.879 "num_blocks": 65536, 00:16:46.880 "uuid": "62bb2085-d7d9-45b3-9aa5-dd4be4d0a8d4", 00:16:46.880 "assigned_rate_limits": { 00:16:46.880 "rw_ios_per_sec": 0, 00:16:46.880 "rw_mbytes_per_sec": 0, 00:16:46.880 "r_mbytes_per_sec": 0, 00:16:46.880 "w_mbytes_per_sec": 0 00:16:46.880 }, 00:16:46.880 "claimed": false, 00:16:46.880 "zoned": false, 00:16:46.880 "supported_io_types": { 00:16:46.880 "read": true, 00:16:46.880 "write": true, 00:16:46.880 "unmap": true, 00:16:46.880 "write_zeroes": true, 00:16:46.880 "flush": true, 00:16:46.880 "reset": true, 00:16:46.880 "compare": false, 00:16:46.880 "compare_and_write": false, 00:16:46.880 "abort": true, 00:16:46.880 "nvme_admin": false, 00:16:46.880 "nvme_io": false 00:16:46.880 }, 00:16:46.880 "memory_domains": [ 00:16:46.880 { 00:16:46.880 "dma_device_id": "system", 00:16:46.880 "dma_device_type": 1 00:16:46.880 }, 00:16:46.880 { 00:16:46.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.880 "dma_device_type": 2 00:16:46.880 } 00:16:46.880 ], 00:16:46.880 "driver_specific": {} 00:16:46.880 } 00:16:46.880 ] 00:16:46.880 00:30:40 -- common/autotest_common.sh@893 -- # return 0 00:16:46.880 00:30:40 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:46.880 [2024-04-24 00:30:40.656216] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:46.880 [2024-04-24 00:30:40.658500] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.880 [2024-04-24 00:30:40.658572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.138 00:30:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.396 
00:30:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.396 "name": "Existed_Raid", 00:16:47.396 "uuid": "24c3d2df-e78b-45c3-9055-ce5560ede75b", 00:16:47.396 "strip_size_kb": 64, 00:16:47.396 "state": "configuring", 00:16:47.396 "raid_level": "raid0", 00:16:47.396 "superblock": true, 00:16:47.396 "num_base_bdevs": 2, 00:16:47.396 "num_base_bdevs_discovered": 1, 00:16:47.396 "num_base_bdevs_operational": 2, 00:16:47.396 "base_bdevs_list": [ 00:16:47.396 { 00:16:47.396 "name": "BaseBdev1", 00:16:47.396 "uuid": "62bb2085-d7d9-45b3-9aa5-dd4be4d0a8d4", 00:16:47.396 "is_configured": true, 00:16:47.396 "data_offset": 2048, 00:16:47.396 "data_size": 63488 00:16:47.396 }, 00:16:47.396 { 00:16:47.396 "name": "BaseBdev2", 00:16:47.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.396 "is_configured": false, 00:16:47.396 "data_offset": 0, 00:16:47.396 "data_size": 0 00:16:47.396 } 00:16:47.396 ] 00:16:47.396 }' 00:16:47.396 00:30:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.396 00:30:40 -- common/autotest_common.sh@10 -- # set +x 00:16:47.963 00:30:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:48.221 [2024-04-24 00:30:42.011589] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.479 [2024-04-24 00:30:42.011823] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:16:48.479 [2024-04-24 00:30:42.011837] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:48.479 [2024-04-24 00:30:42.011992] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:48.479 [2024-04-24 00:30:42.012334] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:16:48.480 [2024-04-24 00:30:42.012371] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:16:48.480 [2024-04-24 00:30:42.012509] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.480 BaseBdev2 00:16:48.480 00:30:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:48.480 00:30:42 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:48.480 00:30:42 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:48.480 00:30:42 -- common/autotest_common.sh@887 -- # local i 00:16:48.480 00:30:42 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:48.480 00:30:42 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:48.480 00:30:42 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.480 00:30:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:48.738 [ 00:16:48.738 { 00:16:48.738 "name": "BaseBdev2", 00:16:48.738 "aliases": [ 00:16:48.738 "90a6f2a3-e83e-4dda-b421-2c32f0953cab" 00:16:48.738 ], 00:16:48.738 "product_name": "Malloc disk", 00:16:48.738 "block_size": 512, 00:16:48.738 "num_blocks": 65536, 00:16:48.738 "uuid": "90a6f2a3-e83e-4dda-b421-2c32f0953cab", 00:16:48.738 "assigned_rate_limits": { 00:16:48.738 "rw_ios_per_sec": 0, 00:16:48.738 "rw_mbytes_per_sec": 0, 00:16:48.738 "r_mbytes_per_sec": 0, 00:16:48.738 "w_mbytes_per_sec": 0 00:16:48.738 }, 00:16:48.738 "claimed": true, 00:16:48.738 "claim_type": "exclusive_write", 00:16:48.738 
"zoned": false, 00:16:48.738 "supported_io_types": { 00:16:48.738 "read": true, 00:16:48.738 "write": true, 00:16:48.738 "unmap": true, 00:16:48.738 "write_zeroes": true, 00:16:48.738 "flush": true, 00:16:48.738 "reset": true, 00:16:48.738 "compare": false, 00:16:48.738 "compare_and_write": false, 00:16:48.738 "abort": true, 00:16:48.738 "nvme_admin": false, 00:16:48.738 "nvme_io": false 00:16:48.738 }, 00:16:48.738 "memory_domains": [ 00:16:48.738 { 00:16:48.738 "dma_device_id": "system", 00:16:48.738 "dma_device_type": 1 00:16:48.738 }, 00:16:48.738 { 00:16:48.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.738 "dma_device_type": 2 00:16:48.738 } 00:16:48.738 ], 00:16:48.738 "driver_specific": {} 00:16:48.738 } 00:16:48.738 ] 00:16:48.738 00:30:42 -- common/autotest_common.sh@893 -- # return 0 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.738 00:30:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.994 00:30:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.994 "name": "Existed_Raid", 00:16:48.994 "uuid": "24c3d2df-e78b-45c3-9055-ce5560ede75b", 00:16:48.994 "strip_size_kb": 64, 00:16:48.994 "state": "online", 00:16:48.994 "raid_level": "raid0", 00:16:48.994 "superblock": true, 00:16:48.994 "num_base_bdevs": 2, 00:16:48.994 "num_base_bdevs_discovered": 2, 00:16:48.994 "num_base_bdevs_operational": 2, 00:16:48.994 "base_bdevs_list": [ 00:16:48.994 { 00:16:48.994 "name": "BaseBdev1", 00:16:48.994 "uuid": "62bb2085-d7d9-45b3-9aa5-dd4be4d0a8d4", 00:16:48.994 "is_configured": true, 00:16:48.994 "data_offset": 2048, 00:16:48.994 "data_size": 63488 00:16:48.994 }, 00:16:48.994 { 00:16:48.994 "name": "BaseBdev2", 00:16:48.994 "uuid": "90a6f2a3-e83e-4dda-b421-2c32f0953cab", 00:16:48.994 "is_configured": true, 00:16:48.994 "data_offset": 2048, 00:16:48.994 "data_size": 63488 00:16:48.994 } 00:16:48.994 ] 00:16:48.994 }' 00:16:48.994 00:30:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.994 00:30:42 -- common/autotest_common.sh@10 -- # set +x 00:16:49.562 00:30:43 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:49.821 [2024-04-24 00:30:43.516004] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:49.821 [2024-04-24 00:30:43.516043] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.821 [2024-04-24 00:30:43.516097] bdev_raid.c: 449:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.082 "name": "Existed_Raid", 00:16:50.082 "uuid": "24c3d2df-e78b-45c3-9055-ce5560ede75b", 00:16:50.082 "strip_size_kb": 64, 00:16:50.082 "state": "offline", 00:16:50.082 "raid_level": "raid0", 00:16:50.082 "superblock": true, 00:16:50.082 "num_base_bdevs": 2, 00:16:50.082 "num_base_bdevs_discovered": 1, 00:16:50.082 "num_base_bdevs_operational": 1, 00:16:50.082 "base_bdevs_list": [ 00:16:50.082 { 00:16:50.082 "name": null, 00:16:50.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.082 "is_configured": false, 00:16:50.082 "data_offset": 2048, 00:16:50.082 "data_size": 63488 00:16:50.082 }, 00:16:50.082 { 00:16:50.082 "name": "BaseBdev2", 00:16:50.082 "uuid": "90a6f2a3-e83e-4dda-b421-2c32f0953cab", 00:16:50.082 "is_configured": true, 00:16:50.082 "data_offset": 2048, 00:16:50.082 "data_size": 63488 00:16:50.082 } 00:16:50.082 ] 00:16:50.082 }' 00:16:50.082 00:30:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.082 00:30:43 -- common/autotest_common.sh@10 -- # set +x 00:16:50.669 00:30:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:50.669 00:30:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:50.669 00:30:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.669 00:30:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:50.927 00:30:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:50.927 00:30:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:50.927 00:30:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:51.185 [2024-04-24 00:30:44.765392] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:51.185 [2024-04-24 00:30:44.765489] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:16:51.185 00:30:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:51.185 00:30:44 -- bdev/bdev_raid.sh@273 -- # (( i < 
num_base_bdevs )) 00:16:51.185 00:30:44 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.185 00:30:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:51.444 00:30:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:51.444 00:30:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:51.444 00:30:45 -- bdev/bdev_raid.sh@287 -- # killprocess 120651 00:16:51.444 00:30:45 -- common/autotest_common.sh@936 -- # '[' -z 120651 ']' 00:16:51.444 00:30:45 -- common/autotest_common.sh@940 -- # kill -0 120651 00:16:51.444 00:30:45 -- common/autotest_common.sh@941 -- # uname 00:16:51.444 00:30:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:51.444 00:30:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120651 00:16:51.444 killing process with pid 120651 00:16:51.444 00:30:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:51.444 00:30:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:51.444 00:30:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120651' 00:16:51.444 00:30:45 -- common/autotest_common.sh@955 -- # kill 120651 00:16:51.444 [2024-04-24 00:30:45.185948] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.444 00:30:45 -- common/autotest_common.sh@960 -- # wait 120651 00:16:51.444 [2024-04-24 00:30:45.186046] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.818 ************************************ 00:16:52.818 END TEST raid_state_function_test_sb 00:16:52.818 ************************************ 00:16:52.818 00:30:46 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:52.818 00:16:52.818 real 0m12.693s 00:16:52.818 user 0m21.611s 00:16:52.818 sys 0m1.813s 00:16:52.818 00:30:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:52.818 00:30:46 -- common/autotest_common.sh@10 -- # set +x 00:16:52.818 00:30:46 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:16:52.818 00:30:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:52.818 00:30:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:52.819 00:30:46 -- common/autotest_common.sh@10 -- # set +x 00:16:53.077 ************************************ 00:16:53.077 START TEST raid_superblock_test 00:16:53.077 ************************************ 00:16:53.077 00:30:46 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid0 2 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' 
raid1 ']' 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@357 -- # raid_pid=121006 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@358 -- # waitforlisten 121006 /var/tmp/spdk-raid.sock 00:16:53.077 00:30:46 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:53.077 00:30:46 -- common/autotest_common.sh@817 -- # '[' -z 121006 ']' 00:16:53.077 00:30:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:53.077 00:30:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:53.077 00:30:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:53.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:53.077 00:30:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:53.077 00:30:46 -- common/autotest_common.sh@10 -- # set +x 00:16:53.077 [2024-04-24 00:30:46.693661] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:16:53.077 [2024-04-24 00:30:46.693861] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121006 ] 00:16:53.337 [2024-04-24 00:30:46.872997] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.338 [2024-04-24 00:30:47.074524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.595 [2024-04-24 00:30:47.280979] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.854 00:30:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:53.854 00:30:47 -- common/autotest_common.sh@850 -- # return 0 00:16:53.854 00:30:47 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:53.854 00:30:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:53.854 00:30:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:53.854 00:30:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:53.854 00:30:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:53.854 00:30:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:53.854 00:30:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:53.854 00:30:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:53.854 00:30:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:54.113 malloc1 00:16:54.371 00:30:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:54.371 [2024-04-24 00:30:48.091266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:54.371 [2024-04-24 00:30:48.091361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.371 [2024-04-24 00:30:48.091394] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:54.371 [2024-04-24 00:30:48.091450] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.371 [2024-04-24 
00:30:48.093964] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.371 [2024-04-24 00:30:48.094015] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:54.371 pt1 00:16:54.371 00:30:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:54.371 00:30:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:54.371 00:30:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:54.371 00:30:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:54.371 00:30:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:54.371 00:30:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:54.371 00:30:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:54.371 00:30:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:54.372 00:30:48 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:54.629 malloc2 00:16:54.629 00:30:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:54.891 [2024-04-24 00:30:48.604587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:54.891 [2024-04-24 00:30:48.604670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.891 [2024-04-24 00:30:48.604713] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:54.891 [2024-04-24 00:30:48.604763] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.891 [2024-04-24 00:30:48.607117] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.891 [2024-04-24 00:30:48.607169] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:54.891 pt2 00:16:54.891 00:30:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:54.891 00:30:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:54.891 00:30:48 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:55.154 [2024-04-24 00:30:48.828666] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:55.154 [2024-04-24 00:30:48.830663] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:55.154 [2024-04-24 00:30:48.830845] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:16:55.154 [2024-04-24 00:30:48.830857] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:55.154 [2024-04-24 00:30:48.831011] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:55.154 [2024-04-24 00:30:48.831330] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:16:55.154 [2024-04-24 00:30:48.831355] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:16:55.154 [2024-04-24 00:30:48.831510] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.154 00:30:48 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:55.154 00:30:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:55.154 
00:30:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:55.154 00:30:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:55.154 00:30:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:55.154 00:30:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:55.154 00:30:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.154 00:30:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.154 00:30:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.154 00:30:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.154 00:30:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.154 00:30:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.413 00:30:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.413 "name": "raid_bdev1", 00:16:55.413 "uuid": "f2f3755c-ede0-4952-95c1-193af8e9aacd", 00:16:55.413 "strip_size_kb": 64, 00:16:55.413 "state": "online", 00:16:55.413 "raid_level": "raid0", 00:16:55.413 "superblock": true, 00:16:55.413 "num_base_bdevs": 2, 00:16:55.413 "num_base_bdevs_discovered": 2, 00:16:55.413 "num_base_bdevs_operational": 2, 00:16:55.413 "base_bdevs_list": [ 00:16:55.413 { 00:16:55.413 "name": "pt1", 00:16:55.413 "uuid": "513c5feb-713b-5b8a-8591-b03e592a5eb3", 00:16:55.413 "is_configured": true, 00:16:55.413 "data_offset": 2048, 00:16:55.413 "data_size": 63488 00:16:55.413 }, 00:16:55.413 { 00:16:55.413 "name": "pt2", 00:16:55.413 "uuid": "0c303390-be5a-5ece-bfa7-ead536fd079d", 00:16:55.413 "is_configured": true, 00:16:55.413 "data_offset": 2048, 00:16:55.413 "data_size": 63488 00:16:55.413 } 00:16:55.413 ] 00:16:55.413 }' 00:16:55.413 00:30:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.413 00:30:49 -- common/autotest_common.sh@10 -- # set +x 00:16:55.980 00:30:49 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:55.980 00:30:49 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:56.238 [2024-04-24 00:30:49.837047] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.238 00:30:49 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f2f3755c-ede0-4952-95c1-193af8e9aacd 00:16:56.238 00:30:49 -- bdev/bdev_raid.sh@380 -- # '[' -z f2f3755c-ede0-4952-95c1-193af8e9aacd ']' 00:16:56.238 00:30:49 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:56.496 [2024-04-24 00:30:50.120859] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:56.496 [2024-04-24 00:30:50.120901] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.496 [2024-04-24 00:30:50.120999] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.496 [2024-04-24 00:30:50.121047] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.496 [2024-04-24 00:30:50.121056] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:16:56.496 00:30:50 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:56.496 00:30:50 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.754 00:30:50 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 
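The state checks traced above all follow one pattern: query the raid bdev list over the test RPC socket and compare individual fields with jq. What follows is a simplified, illustrative shell sketch of that pattern, not a copy of the helpers in test/bdev/bdev_raid.sh; the rpc.py path, socket path, and the bdev_raid_get_bdevs/jq invocations are taken from the log itself, while the check_raid_state function name is invented for this example.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

check_raid_state() {
    # Fetch the entry for one raid bdev and compare its reported state.
    local name=$1 want_state=$2
    local info
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -r ".[] | select(.name == \"$name\")")
    [[ -n $info && $(jq -r '.state' <<<"$info") == "$want_state" ]]
}

check_raid_state raid_bdev1 online        # holds while both pt1 and pt2 are configured
"$rpc" -s "$sock" bdev_raid_delete raid_bdev1
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'   # prints nothing once deleted
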
00:16:56.754 00:30:50 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:56.754 00:30:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:56.754 00:30:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:57.012 00:30:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:57.012 00:30:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:57.328 00:30:50 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:57.328 00:30:50 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:57.585 00:30:51 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:57.585 00:30:51 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:57.585 00:30:51 -- common/autotest_common.sh@638 -- # local es=0 00:16:57.585 00:30:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:57.585 00:30:51 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.585 00:30:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:57.586 00:30:51 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.586 00:30:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:57.586 00:30:51 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.586 00:30:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:57.586 00:30:51 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.586 00:30:51 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:57.586 00:30:51 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:16:57.844 [2024-04-24 00:30:51.505135] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:57.844 [2024-04-24 00:30:51.507170] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:57.844 [2024-04-24 00:30:51.507231] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:57.844 [2024-04-24 00:30:51.507295] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:57.844 [2024-04-24 00:30:51.507326] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.844 [2024-04-24 00:30:51.507335] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:16:57.844 request: 00:16:57.844 { 00:16:57.844 "name": "raid_bdev1", 00:16:57.844 "raid_level": "raid0", 00:16:57.844 "base_bdevs": [ 00:16:57.844 "malloc1", 00:16:57.844 "malloc2" 00:16:57.844 ], 00:16:57.844 "superblock": false, 00:16:57.844 "strip_size_kb": 64, 00:16:57.844 "method": "bdev_raid_create", 00:16:57.844 "req_id": 1 00:16:57.844 } 00:16:57.844 Got 
JSON-RPC error response 00:16:57.844 response: 00:16:57.844 { 00:16:57.844 "code": -17, 00:16:57.844 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:57.844 } 00:16:57.844 00:30:51 -- common/autotest_common.sh@641 -- # es=1 00:16:57.844 00:30:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:57.844 00:30:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:57.844 00:30:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:57.844 00:30:51 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.844 00:30:51 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:58.102 00:30:51 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:58.102 00:30:51 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:58.102 00:30:51 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:58.362 [2024-04-24 00:30:51.977174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:58.362 [2024-04-24 00:30:51.977281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.362 [2024-04-24 00:30:51.977319] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:58.362 [2024-04-24 00:30:51.977345] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.362 [2024-04-24 00:30:51.979565] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.362 [2024-04-24 00:30:51.979617] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:58.362 [2024-04-24 00:30:51.979733] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:58.362 [2024-04-24 00:30:51.979778] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:58.362 pt1 00:16:58.362 00:30:51 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:16:58.362 00:30:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:58.362 00:30:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:58.362 00:30:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:58.362 00:30:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:58.362 00:30:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:58.362 00:30:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.362 00:30:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.362 00:30:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.362 00:30:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.362 00:30:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.362 00:30:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.620 00:30:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.620 "name": "raid_bdev1", 00:16:58.620 "uuid": "f2f3755c-ede0-4952-95c1-193af8e9aacd", 00:16:58.620 "strip_size_kb": 64, 00:16:58.620 "state": "configuring", 00:16:58.620 "raid_level": "raid0", 00:16:58.620 "superblock": true, 00:16:58.620 "num_base_bdevs": 2, 00:16:58.620 "num_base_bdevs_discovered": 1, 00:16:58.620 "num_base_bdevs_operational": 2, 00:16:58.620 "base_bdevs_list": [ 00:16:58.620 { 00:16:58.620 "name": 
"pt1", 00:16:58.620 "uuid": "513c5feb-713b-5b8a-8591-b03e592a5eb3", 00:16:58.620 "is_configured": true, 00:16:58.620 "data_offset": 2048, 00:16:58.620 "data_size": 63488 00:16:58.620 }, 00:16:58.620 { 00:16:58.620 "name": null, 00:16:58.620 "uuid": "0c303390-be5a-5ece-bfa7-ead536fd079d", 00:16:58.620 "is_configured": false, 00:16:58.620 "data_offset": 2048, 00:16:58.620 "data_size": 63488 00:16:58.620 } 00:16:58.620 ] 00:16:58.620 }' 00:16:58.620 00:30:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.620 00:30:52 -- common/autotest_common.sh@10 -- # set +x 00:16:59.187 00:30:52 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:59.187 00:30:52 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:59.187 00:30:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:59.187 00:30:52 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.475 [2024-04-24 00:30:53.041395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.475 [2024-04-24 00:30:53.041524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.475 [2024-04-24 00:30:53.041566] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:59.475 [2024-04-24 00:30:53.041592] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.475 [2024-04-24 00:30:53.042044] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.475 [2024-04-24 00:30:53.042083] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.475 [2024-04-24 00:30:53.042187] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:59.475 [2024-04-24 00:30:53.042207] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.475 [2024-04-24 00:30:53.042308] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:16:59.475 [2024-04-24 00:30:53.042317] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:59.475 [2024-04-24 00:30:53.042431] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:59.475 [2024-04-24 00:30:53.042751] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:16:59.475 [2024-04-24 00:30:53.042773] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:16:59.476 [2024-04-24 00:30:53.042913] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.476 pt2 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.476 
00:30:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.476 00:30:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.733 00:30:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:59.733 "name": "raid_bdev1", 00:16:59.733 "uuid": "f2f3755c-ede0-4952-95c1-193af8e9aacd", 00:16:59.733 "strip_size_kb": 64, 00:16:59.733 "state": "online", 00:16:59.733 "raid_level": "raid0", 00:16:59.733 "superblock": true, 00:16:59.733 "num_base_bdevs": 2, 00:16:59.733 "num_base_bdevs_discovered": 2, 00:16:59.733 "num_base_bdevs_operational": 2, 00:16:59.733 "base_bdevs_list": [ 00:16:59.733 { 00:16:59.733 "name": "pt1", 00:16:59.733 "uuid": "513c5feb-713b-5b8a-8591-b03e592a5eb3", 00:16:59.733 "is_configured": true, 00:16:59.733 "data_offset": 2048, 00:16:59.733 "data_size": 63488 00:16:59.733 }, 00:16:59.733 { 00:16:59.733 "name": "pt2", 00:16:59.733 "uuid": "0c303390-be5a-5ece-bfa7-ead536fd079d", 00:16:59.733 "is_configured": true, 00:16:59.733 "data_offset": 2048, 00:16:59.733 "data_size": 63488 00:16:59.733 } 00:16:59.733 ] 00:16:59.733 }' 00:16:59.733 00:30:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:59.733 00:30:53 -- common/autotest_common.sh@10 -- # set +x 00:17:00.300 00:30:53 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:00.300 00:30:53 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:00.559 [2024-04-24 00:30:54.129816] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.559 00:30:54 -- bdev/bdev_raid.sh@430 -- # '[' f2f3755c-ede0-4952-95c1-193af8e9aacd '!=' f2f3755c-ede0-4952-95c1-193af8e9aacd ']' 00:17:00.559 00:30:54 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:00.559 00:30:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:00.559 00:30:54 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:00.559 00:30:54 -- bdev/bdev_raid.sh@511 -- # killprocess 121006 00:17:00.559 00:30:54 -- common/autotest_common.sh@936 -- # '[' -z 121006 ']' 00:17:00.559 00:30:54 -- common/autotest_common.sh@940 -- # kill -0 121006 00:17:00.559 00:30:54 -- common/autotest_common.sh@941 -- # uname 00:17:00.559 00:30:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:00.559 00:30:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121006 00:17:00.559 killing process with pid 121006 00:17:00.559 00:30:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:00.559 00:30:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:00.559 00:30:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121006' 00:17:00.559 00:30:54 -- common/autotest_common.sh@955 -- # kill 121006 00:17:00.559 [2024-04-24 00:30:54.183288] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.559 00:30:54 -- common/autotest_common.sh@960 -- # wait 121006 00:17:00.559 [2024-04-24 00:30:54.183358] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.559 [2024-04-24 00:30:54.183405] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.559 [2024-04-24 00:30:54.183415] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name 
raid_bdev1, state offline 00:17:00.818 [2024-04-24 00:30:54.395808] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.196 ************************************ 00:17:02.196 END TEST raid_superblock_test 00:17:02.196 ************************************ 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:02.196 00:17:02.196 real 0m9.141s 00:17:02.196 user 0m15.060s 00:17:02.196 sys 0m1.372s 00:17:02.196 00:30:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:02.196 00:30:55 -- common/autotest_common.sh@10 -- # set +x 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:17:02.196 00:30:55 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:02.196 00:30:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:02.196 00:30:55 -- common/autotest_common.sh@10 -- # set +x 00:17:02.196 ************************************ 00:17:02.196 START TEST raid_state_function_test 00:17:02.196 ************************************ 00:17:02.196 00:30:55 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 2 false 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=121274 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121274' 00:17:02.196 Process raid pid: 121274 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121274 /var/tmp/spdk-raid.sock 00:17:02.196 00:30:55 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:02.196 00:30:55 -- common/autotest_common.sh@817 -- # '[' -z 121274 ']' 00:17:02.196 00:30:55 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:17:02.196 00:30:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:02.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:02.196 00:30:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:02.196 00:30:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:02.196 00:30:55 -- common/autotest_common.sh@10 -- # set +x 00:17:02.196 [2024-04-24 00:30:55.935184] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:17:02.196 [2024-04-24 00:30:55.935326] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.454 [2024-04-24 00:30:56.100999] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.714 [2024-04-24 00:30:56.307964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.972 [2024-04-24 00:30:56.523412] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.231 00:30:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:03.231 00:30:56 -- common/autotest_common.sh@850 -- # return 0 00:17:03.231 00:30:56 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:03.490 [2024-04-24 00:30:57.095590] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.490 [2024-04-24 00:30:57.095673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.490 [2024-04-24 00:30:57.095685] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.490 [2024-04-24 00:30:57.095702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.490 00:30:57 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:03.490 00:30:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:03.490 00:30:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:03.490 00:30:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:03.490 00:30:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:03.490 00:30:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:03.490 00:30:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:03.490 00:30:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:03.490 00:30:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:03.490 00:30:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:03.490 00:30:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.490 00:30:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.748 00:30:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.748 "name": "Existed_Raid", 00:17:03.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.748 "strip_size_kb": 64, 00:17:03.748 "state": "configuring", 00:17:03.748 "raid_level": "concat", 00:17:03.748 "superblock": false, 00:17:03.748 "num_base_bdevs": 2, 00:17:03.748 "num_base_bdevs_discovered": 0, 00:17:03.748 
"num_base_bdevs_operational": 2, 00:17:03.748 "base_bdevs_list": [ 00:17:03.748 { 00:17:03.748 "name": "BaseBdev1", 00:17:03.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.748 "is_configured": false, 00:17:03.748 "data_offset": 0, 00:17:03.748 "data_size": 0 00:17:03.748 }, 00:17:03.748 { 00:17:03.748 "name": "BaseBdev2", 00:17:03.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.748 "is_configured": false, 00:17:03.748 "data_offset": 0, 00:17:03.748 "data_size": 0 00:17:03.748 } 00:17:03.748 ] 00:17:03.748 }' 00:17:03.748 00:30:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.748 00:30:57 -- common/autotest_common.sh@10 -- # set +x 00:17:04.315 00:30:57 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:04.574 [2024-04-24 00:30:58.155674] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:04.574 [2024-04-24 00:30:58.155730] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:17:04.574 00:30:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:04.834 [2024-04-24 00:30:58.475720] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:04.834 [2024-04-24 00:30:58.475811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:04.834 [2024-04-24 00:30:58.475823] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:04.834 [2024-04-24 00:30:58.475869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:04.834 00:30:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:05.093 [2024-04-24 00:30:58.780873] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.093 BaseBdev1 00:17:05.093 00:30:58 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:05.093 00:30:58 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:05.093 00:30:58 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:05.093 00:30:58 -- common/autotest_common.sh@887 -- # local i 00:17:05.093 00:30:58 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:05.093 00:30:58 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:05.093 00:30:58 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.352 00:30:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:05.611 [ 00:17:05.611 { 00:17:05.611 "name": "BaseBdev1", 00:17:05.611 "aliases": [ 00:17:05.611 "09693dfa-4a56-41cf-b90b-4c8fea4bf40f" 00:17:05.611 ], 00:17:05.611 "product_name": "Malloc disk", 00:17:05.611 "block_size": 512, 00:17:05.611 "num_blocks": 65536, 00:17:05.611 "uuid": "09693dfa-4a56-41cf-b90b-4c8fea4bf40f", 00:17:05.611 "assigned_rate_limits": { 00:17:05.611 "rw_ios_per_sec": 0, 00:17:05.611 "rw_mbytes_per_sec": 0, 00:17:05.611 "r_mbytes_per_sec": 0, 00:17:05.611 "w_mbytes_per_sec": 0 00:17:05.611 }, 00:17:05.611 "claimed": true, 00:17:05.611 "claim_type": "exclusive_write", 00:17:05.611 "zoned": false, 00:17:05.611 
"supported_io_types": { 00:17:05.611 "read": true, 00:17:05.611 "write": true, 00:17:05.611 "unmap": true, 00:17:05.611 "write_zeroes": true, 00:17:05.611 "flush": true, 00:17:05.611 "reset": true, 00:17:05.611 "compare": false, 00:17:05.611 "compare_and_write": false, 00:17:05.611 "abort": true, 00:17:05.611 "nvme_admin": false, 00:17:05.611 "nvme_io": false 00:17:05.611 }, 00:17:05.611 "memory_domains": [ 00:17:05.611 { 00:17:05.611 "dma_device_id": "system", 00:17:05.611 "dma_device_type": 1 00:17:05.611 }, 00:17:05.611 { 00:17:05.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.611 "dma_device_type": 2 00:17:05.611 } 00:17:05.611 ], 00:17:05.611 "driver_specific": {} 00:17:05.611 } 00:17:05.611 ] 00:17:05.611 00:30:59 -- common/autotest_common.sh@893 -- # return 0 00:17:05.611 00:30:59 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:05.611 00:30:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:05.611 00:30:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:05.611 00:30:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:05.611 00:30:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:05.611 00:30:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:05.611 00:30:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.611 00:30:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.611 00:30:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.611 00:30:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.611 00:30:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.611 00:30:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.869 00:30:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.869 "name": "Existed_Raid", 00:17:05.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.869 "strip_size_kb": 64, 00:17:05.869 "state": "configuring", 00:17:05.869 "raid_level": "concat", 00:17:05.869 "superblock": false, 00:17:05.869 "num_base_bdevs": 2, 00:17:05.869 "num_base_bdevs_discovered": 1, 00:17:05.869 "num_base_bdevs_operational": 2, 00:17:05.869 "base_bdevs_list": [ 00:17:05.869 { 00:17:05.869 "name": "BaseBdev1", 00:17:05.869 "uuid": "09693dfa-4a56-41cf-b90b-4c8fea4bf40f", 00:17:05.869 "is_configured": true, 00:17:05.869 "data_offset": 0, 00:17:05.869 "data_size": 65536 00:17:05.869 }, 00:17:05.869 { 00:17:05.869 "name": "BaseBdev2", 00:17:05.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.869 "is_configured": false, 00:17:05.869 "data_offset": 0, 00:17:05.869 "data_size": 0 00:17:05.869 } 00:17:05.869 ] 00:17:05.869 }' 00:17:05.869 00:30:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.869 00:30:59 -- common/autotest_common.sh@10 -- # set +x 00:17:06.804 00:31:00 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:06.804 [2024-04-24 00:31:00.461291] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:06.804 [2024-04-24 00:31:00.461362] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:17:06.804 00:31:00 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:06.804 00:31:00 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:07.063 [2024-04-24 00:31:00.729342] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.063 [2024-04-24 00:31:00.731304] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:07.063 [2024-04-24 00:31:00.731360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.063 00:31:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.321 00:31:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.321 "name": "Existed_Raid", 00:17:07.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.321 "strip_size_kb": 64, 00:17:07.321 "state": "configuring", 00:17:07.321 "raid_level": "concat", 00:17:07.321 "superblock": false, 00:17:07.321 "num_base_bdevs": 2, 00:17:07.321 "num_base_bdevs_discovered": 1, 00:17:07.321 "num_base_bdevs_operational": 2, 00:17:07.321 "base_bdevs_list": [ 00:17:07.321 { 00:17:07.321 "name": "BaseBdev1", 00:17:07.321 "uuid": "09693dfa-4a56-41cf-b90b-4c8fea4bf40f", 00:17:07.321 "is_configured": true, 00:17:07.321 "data_offset": 0, 00:17:07.321 "data_size": 65536 00:17:07.321 }, 00:17:07.321 { 00:17:07.321 "name": "BaseBdev2", 00:17:07.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.321 "is_configured": false, 00:17:07.321 "data_offset": 0, 00:17:07.321 "data_size": 0 00:17:07.321 } 00:17:07.321 ] 00:17:07.321 }' 00:17:07.321 00:31:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.321 00:31:01 -- common/autotest_common.sh@10 -- # set +x 00:17:07.889 00:31:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:08.456 [2024-04-24 00:31:01.953419] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.456 [2024-04-24 00:31:01.953488] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:17:08.456 [2024-04-24 00:31:01.953498] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:08.456 [2024-04-24 00:31:01.953612] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:08.456 [2024-04-24 00:31:01.953940] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:17:08.456 [2024-04-24 
00:31:01.953962] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:17:08.456 [2024-04-24 00:31:01.954229] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.456 BaseBdev2 00:17:08.456 00:31:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:08.456 00:31:01 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:17:08.456 00:31:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:08.456 00:31:01 -- common/autotest_common.sh@887 -- # local i 00:17:08.456 00:31:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:08.456 00:31:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:08.456 00:31:01 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:08.456 00:31:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:08.714 [ 00:17:08.714 { 00:17:08.714 "name": "BaseBdev2", 00:17:08.714 "aliases": [ 00:17:08.714 "b993c2a0-b56b-4a52-93f1-00f1284277a1" 00:17:08.714 ], 00:17:08.714 "product_name": "Malloc disk", 00:17:08.714 "block_size": 512, 00:17:08.714 "num_blocks": 65536, 00:17:08.714 "uuid": "b993c2a0-b56b-4a52-93f1-00f1284277a1", 00:17:08.714 "assigned_rate_limits": { 00:17:08.714 "rw_ios_per_sec": 0, 00:17:08.714 "rw_mbytes_per_sec": 0, 00:17:08.714 "r_mbytes_per_sec": 0, 00:17:08.714 "w_mbytes_per_sec": 0 00:17:08.714 }, 00:17:08.714 "claimed": true, 00:17:08.714 "claim_type": "exclusive_write", 00:17:08.714 "zoned": false, 00:17:08.714 "supported_io_types": { 00:17:08.714 "read": true, 00:17:08.714 "write": true, 00:17:08.714 "unmap": true, 00:17:08.714 "write_zeroes": true, 00:17:08.714 "flush": true, 00:17:08.714 "reset": true, 00:17:08.714 "compare": false, 00:17:08.714 "compare_and_write": false, 00:17:08.714 "abort": true, 00:17:08.714 "nvme_admin": false, 00:17:08.714 "nvme_io": false 00:17:08.714 }, 00:17:08.714 "memory_domains": [ 00:17:08.714 { 00:17:08.714 "dma_device_id": "system", 00:17:08.714 "dma_device_type": 1 00:17:08.714 }, 00:17:08.714 { 00:17:08.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.714 "dma_device_type": 2 00:17:08.714 } 00:17:08.714 ], 00:17:08.714 "driver_specific": {} 00:17:08.714 } 00:17:08.714 ] 00:17:08.714 00:31:02 -- common/autotest_common.sh@893 -- # return 0 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:08.714 00:31:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.974 00:31:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.974 "name": "Existed_Raid", 00:17:08.974 "uuid": "551f4033-008f-4a53-b507-f60a163776af", 00:17:08.974 "strip_size_kb": 64, 00:17:08.974 "state": "online", 00:17:08.974 "raid_level": "concat", 00:17:08.974 "superblock": false, 00:17:08.974 "num_base_bdevs": 2, 00:17:08.974 "num_base_bdevs_discovered": 2, 00:17:08.974 "num_base_bdevs_operational": 2, 00:17:08.974 "base_bdevs_list": [ 00:17:08.974 { 00:17:08.974 "name": "BaseBdev1", 00:17:08.974 "uuid": "09693dfa-4a56-41cf-b90b-4c8fea4bf40f", 00:17:08.974 "is_configured": true, 00:17:08.974 "data_offset": 0, 00:17:08.974 "data_size": 65536 00:17:08.974 }, 00:17:08.974 { 00:17:08.974 "name": "BaseBdev2", 00:17:08.974 "uuid": "b993c2a0-b56b-4a52-93f1-00f1284277a1", 00:17:08.974 "is_configured": true, 00:17:08.974 "data_offset": 0, 00:17:08.974 "data_size": 65536 00:17:08.974 } 00:17:08.974 ] 00:17:08.974 }' 00:17:08.974 00:31:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.974 00:31:02 -- common/autotest_common.sh@10 -- # set +x 00:17:09.908 00:31:03 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:09.908 [2024-04-24 00:31:03.671713] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:09.908 [2024-04-24 00:31:03.671755] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.908 [2024-04-24 00:31:03.671832] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.166 00:31:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.423 00:31:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.423 "name": "Existed_Raid", 00:17:10.423 "uuid": "551f4033-008f-4a53-b507-f60a163776af", 00:17:10.423 "strip_size_kb": 64, 00:17:10.423 "state": "offline", 00:17:10.423 "raid_level": "concat", 00:17:10.423 "superblock": false, 00:17:10.423 "num_base_bdevs": 2, 00:17:10.423 "num_base_bdevs_discovered": 1, 00:17:10.423 "num_base_bdevs_operational": 1, 00:17:10.423 
"base_bdevs_list": [ 00:17:10.423 { 00:17:10.423 "name": null, 00:17:10.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.423 "is_configured": false, 00:17:10.423 "data_offset": 0, 00:17:10.423 "data_size": 65536 00:17:10.423 }, 00:17:10.423 { 00:17:10.423 "name": "BaseBdev2", 00:17:10.423 "uuid": "b993c2a0-b56b-4a52-93f1-00f1284277a1", 00:17:10.423 "is_configured": true, 00:17:10.423 "data_offset": 0, 00:17:10.423 "data_size": 65536 00:17:10.423 } 00:17:10.423 ] 00:17:10.423 }' 00:17:10.423 00:31:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.423 00:31:04 -- common/autotest_common.sh@10 -- # set +x 00:17:10.988 00:31:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:10.988 00:31:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:10.988 00:31:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.988 00:31:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:11.245 00:31:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:11.246 00:31:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:11.246 00:31:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:11.503 [2024-04-24 00:31:05.161658] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:11.503 [2024-04-24 00:31:05.161936] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:17:11.759 00:31:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:11.759 00:31:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:11.759 00:31:05 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.759 00:31:05 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:12.017 00:31:05 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:12.017 00:31:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:12.017 00:31:05 -- bdev/bdev_raid.sh@287 -- # killprocess 121274 00:17:12.017 00:31:05 -- common/autotest_common.sh@936 -- # '[' -z 121274 ']' 00:17:12.017 00:31:05 -- common/autotest_common.sh@940 -- # kill -0 121274 00:17:12.017 00:31:05 -- common/autotest_common.sh@941 -- # uname 00:17:12.017 00:31:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:12.017 00:31:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121274 00:17:12.017 killing process with pid 121274 00:17:12.017 00:31:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:12.017 00:31:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:12.017 00:31:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121274' 00:17:12.017 00:31:05 -- common/autotest_common.sh@955 -- # kill 121274 00:17:12.017 00:31:05 -- common/autotest_common.sh@960 -- # wait 121274 00:17:12.017 [2024-04-24 00:31:05.605445] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:12.017 [2024-04-24 00:31:05.605582] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.392 ************************************ 00:17:13.392 END TEST raid_state_function_test 00:17:13.392 ************************************ 00:17:13.392 00:31:07 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:13.392 00:17:13.392 real 0m11.148s 00:17:13.392 user 0m18.670s 00:17:13.392 sys 0m1.636s 00:17:13.392 00:31:07 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:17:13.392 00:31:07 -- common/autotest_common.sh@10 -- # set +x 00:17:13.392 00:31:07 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:17:13.392 00:31:07 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:13.392 00:31:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:13.392 00:31:07 -- common/autotest_common.sh@10 -- # set +x 00:17:13.392 ************************************ 00:17:13.392 START TEST raid_state_function_test_sb 00:17:13.392 ************************************ 00:17:13.392 00:31:07 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 2 true 00:17:13.392 00:31:07 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@226 -- # raid_pid=121606 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121606' 00:17:13.393 Process raid pid: 121606 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:13.393 00:31:07 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121606 /var/tmp/spdk-raid.sock 00:17:13.393 00:31:07 -- common/autotest_common.sh@817 -- # '[' -z 121606 ']' 00:17:13.393 00:31:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:13.393 00:31:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:13.393 00:31:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:13.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
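(Editor's note: the trace above drives a standalone bdev_svc app over the /var/tmp/spdk-raid.sock RPC socket. A minimal sketch of the same RPC sequence, run by hand and with the ordering simplified, is shown below; it assumes the SPDK tree is checked out at /home/vagrant/spdk_repo/spdk as in the trace, and the $RPC shorthand is only for brevity here.)
# Start the bdev service with bdev_raid debug logging on the raid test socket
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Two 32 MB malloc base bdevs with 512-byte blocks (65536 blocks each, as bdev_get_bdevs reports)
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2
# Assemble a concat raid with a 64 KB strip size; -s asks for an on-disk superblock
$RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# Inspect the raid state the same way verify_raid_bdev_state does
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
(The -s superblock flag is why the later JSON shows data_offset 2048 and data_size 63488 instead of 0/65536: the first 2048 blocks of each base bdev are reserved for raid metadata.)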
00:17:13.393 00:31:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:13.393 00:31:07 -- common/autotest_common.sh@10 -- # set +x 00:17:13.652 [2024-04-24 00:31:07.189913] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:17:13.652 [2024-04-24 00:31:07.190229] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.652 [2024-04-24 00:31:07.366352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.910 [2024-04-24 00:31:07.641618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.168 [2024-04-24 00:31:07.900692] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.426 00:31:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:14.426 00:31:08 -- common/autotest_common.sh@850 -- # return 0 00:17:14.426 00:31:08 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:14.683 [2024-04-24 00:31:08.364102] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:14.683 [2024-04-24 00:31:08.364383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:14.683 [2024-04-24 00:31:08.364531] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:14.683 [2024-04-24 00:31:08.364594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:14.683 00:31:08 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:14.683 00:31:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.683 00:31:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.683 00:31:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:14.683 00:31:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:14.683 00:31:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:14.683 00:31:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.683 00:31:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.683 00:31:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.683 00:31:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.683 00:31:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.683 00:31:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.941 00:31:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.941 "name": "Existed_Raid", 00:17:14.941 "uuid": "72200ff3-e116-4219-bbcb-89e095145bbc", 00:17:14.941 "strip_size_kb": 64, 00:17:14.941 "state": "configuring", 00:17:14.941 "raid_level": "concat", 00:17:14.941 "superblock": true, 00:17:14.941 "num_base_bdevs": 2, 00:17:14.941 "num_base_bdevs_discovered": 0, 00:17:14.941 "num_base_bdevs_operational": 2, 00:17:14.941 "base_bdevs_list": [ 00:17:14.941 { 00:17:14.941 "name": "BaseBdev1", 00:17:14.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.941 "is_configured": false, 00:17:14.941 "data_offset": 0, 00:17:14.941 "data_size": 0 00:17:14.941 }, 00:17:14.941 { 00:17:14.941 "name": "BaseBdev2", 00:17:14.941 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:14.941 "is_configured": false, 00:17:14.941 "data_offset": 0, 00:17:14.941 "data_size": 0 00:17:14.941 } 00:17:14.941 ] 00:17:14.941 }' 00:17:14.941 00:31:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.941 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:17:15.508 00:31:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:16.075 [2024-04-24 00:31:09.576212] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:16.075 [2024-04-24 00:31:09.576457] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:17:16.075 00:31:09 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:16.075 [2024-04-24 00:31:09.860277] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:16.075 [2024-04-24 00:31:09.860578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:16.075 [2024-04-24 00:31:09.860682] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:16.075 [2024-04-24 00:31:09.860751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:16.333 00:31:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:16.333 [2024-04-24 00:31:10.099019] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:16.333 BaseBdev1 00:17:16.333 00:31:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:16.333 00:31:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:16.333 00:31:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:16.333 00:31:10 -- common/autotest_common.sh@887 -- # local i 00:17:16.333 00:31:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:16.333 00:31:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:16.333 00:31:10 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:16.591 00:31:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:17.167 [ 00:17:17.167 { 00:17:17.167 "name": "BaseBdev1", 00:17:17.167 "aliases": [ 00:17:17.167 "369792a0-7cca-4979-aa75-106ae2d9fe9a" 00:17:17.167 ], 00:17:17.167 "product_name": "Malloc disk", 00:17:17.167 "block_size": 512, 00:17:17.167 "num_blocks": 65536, 00:17:17.167 "uuid": "369792a0-7cca-4979-aa75-106ae2d9fe9a", 00:17:17.167 "assigned_rate_limits": { 00:17:17.167 "rw_ios_per_sec": 0, 00:17:17.167 "rw_mbytes_per_sec": 0, 00:17:17.167 "r_mbytes_per_sec": 0, 00:17:17.167 "w_mbytes_per_sec": 0 00:17:17.167 }, 00:17:17.167 "claimed": true, 00:17:17.167 "claim_type": "exclusive_write", 00:17:17.167 "zoned": false, 00:17:17.167 "supported_io_types": { 00:17:17.167 "read": true, 00:17:17.167 "write": true, 00:17:17.167 "unmap": true, 00:17:17.167 "write_zeroes": true, 00:17:17.167 "flush": true, 00:17:17.167 "reset": true, 00:17:17.167 "compare": false, 00:17:17.167 "compare_and_write": false, 00:17:17.167 "abort": true, 00:17:17.167 "nvme_admin": false, 00:17:17.167 "nvme_io": 
false 00:17:17.167 }, 00:17:17.167 "memory_domains": [ 00:17:17.167 { 00:17:17.167 "dma_device_id": "system", 00:17:17.167 "dma_device_type": 1 00:17:17.167 }, 00:17:17.167 { 00:17:17.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.167 "dma_device_type": 2 00:17:17.167 } 00:17:17.167 ], 00:17:17.167 "driver_specific": {} 00:17:17.167 } 00:17:17.167 ] 00:17:17.167 00:31:10 -- common/autotest_common.sh@893 -- # return 0 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.167 00:31:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.167 "name": "Existed_Raid", 00:17:17.167 "uuid": "a7be5628-d50c-4817-81eb-e6cb9ead2aef", 00:17:17.167 "strip_size_kb": 64, 00:17:17.167 "state": "configuring", 00:17:17.167 "raid_level": "concat", 00:17:17.167 "superblock": true, 00:17:17.167 "num_base_bdevs": 2, 00:17:17.167 "num_base_bdevs_discovered": 1, 00:17:17.167 "num_base_bdevs_operational": 2, 00:17:17.167 "base_bdevs_list": [ 00:17:17.167 { 00:17:17.168 "name": "BaseBdev1", 00:17:17.168 "uuid": "369792a0-7cca-4979-aa75-106ae2d9fe9a", 00:17:17.168 "is_configured": true, 00:17:17.168 "data_offset": 2048, 00:17:17.168 "data_size": 63488 00:17:17.168 }, 00:17:17.168 { 00:17:17.168 "name": "BaseBdev2", 00:17:17.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.168 "is_configured": false, 00:17:17.168 "data_offset": 0, 00:17:17.168 "data_size": 0 00:17:17.168 } 00:17:17.168 ] 00:17:17.168 }' 00:17:17.168 00:31:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.168 00:31:10 -- common/autotest_common.sh@10 -- # set +x 00:17:18.121 00:31:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:18.121 [2024-04-24 00:31:11.815511] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.121 [2024-04-24 00:31:11.816439] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:17:18.121 00:31:11 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:18.121 00:31:11 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:18.379 00:31:12 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:18.947 BaseBdev1 00:17:18.947 00:31:12 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:18.947 00:31:12 -- common/autotest_common.sh@885 -- # local 
bdev_name=BaseBdev1 00:17:18.947 00:31:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:18.947 00:31:12 -- common/autotest_common.sh@887 -- # local i 00:17:18.947 00:31:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:18.947 00:31:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:18.947 00:31:12 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:19.254 00:31:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:19.254 [ 00:17:19.254 { 00:17:19.254 "name": "BaseBdev1", 00:17:19.254 "aliases": [ 00:17:19.254 "d7bf9608-062d-4744-bb92-21aa06ce8aed" 00:17:19.254 ], 00:17:19.254 "product_name": "Malloc disk", 00:17:19.254 "block_size": 512, 00:17:19.254 "num_blocks": 65536, 00:17:19.254 "uuid": "d7bf9608-062d-4744-bb92-21aa06ce8aed", 00:17:19.254 "assigned_rate_limits": { 00:17:19.254 "rw_ios_per_sec": 0, 00:17:19.254 "rw_mbytes_per_sec": 0, 00:17:19.254 "r_mbytes_per_sec": 0, 00:17:19.254 "w_mbytes_per_sec": 0 00:17:19.254 }, 00:17:19.254 "claimed": false, 00:17:19.254 "zoned": false, 00:17:19.254 "supported_io_types": { 00:17:19.254 "read": true, 00:17:19.254 "write": true, 00:17:19.254 "unmap": true, 00:17:19.254 "write_zeroes": true, 00:17:19.254 "flush": true, 00:17:19.254 "reset": true, 00:17:19.254 "compare": false, 00:17:19.254 "compare_and_write": false, 00:17:19.254 "abort": true, 00:17:19.254 "nvme_admin": false, 00:17:19.254 "nvme_io": false 00:17:19.254 }, 00:17:19.254 "memory_domains": [ 00:17:19.254 { 00:17:19.254 "dma_device_id": "system", 00:17:19.254 "dma_device_type": 1 00:17:19.254 }, 00:17:19.254 { 00:17:19.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.254 "dma_device_type": 2 00:17:19.254 } 00:17:19.254 ], 00:17:19.254 "driver_specific": {} 00:17:19.254 } 00:17:19.254 ] 00:17:19.254 00:31:12 -- common/autotest_common.sh@893 -- # return 0 00:17:19.254 00:31:12 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:19.512 [2024-04-24 00:31:13.160029] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.512 [2024-04-24 00:31:13.162746] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.512 [2024-04-24 00:31:13.162965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.512 00:31:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@125 -- # 
local tmp 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.513 00:31:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.770 00:31:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.770 "name": "Existed_Raid", 00:17:19.770 "uuid": "b8b7eabd-8fe7-4a85-8b15-1c576faa9909", 00:17:19.770 "strip_size_kb": 64, 00:17:19.770 "state": "configuring", 00:17:19.770 "raid_level": "concat", 00:17:19.770 "superblock": true, 00:17:19.770 "num_base_bdevs": 2, 00:17:19.770 "num_base_bdevs_discovered": 1, 00:17:19.770 "num_base_bdevs_operational": 2, 00:17:19.770 "base_bdevs_list": [ 00:17:19.770 { 00:17:19.771 "name": "BaseBdev1", 00:17:19.771 "uuid": "d7bf9608-062d-4744-bb92-21aa06ce8aed", 00:17:19.771 "is_configured": true, 00:17:19.771 "data_offset": 2048, 00:17:19.771 "data_size": 63488 00:17:19.771 }, 00:17:19.771 { 00:17:19.771 "name": "BaseBdev2", 00:17:19.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.771 "is_configured": false, 00:17:19.771 "data_offset": 0, 00:17:19.771 "data_size": 0 00:17:19.771 } 00:17:19.771 ] 00:17:19.771 }' 00:17:19.771 00:31:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.771 00:31:13 -- common/autotest_common.sh@10 -- # set +x 00:17:20.337 00:31:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:20.905 [2024-04-24 00:31:14.393355] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.905 [2024-04-24 00:31:14.393839] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:17:20.905 [2024-04-24 00:31:14.393965] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:20.905 [2024-04-24 00:31:14.394124] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:20.905 [2024-04-24 00:31:14.394482] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:17:20.905 [2024-04-24 00:31:14.394521] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:17:20.905 [2024-04-24 00:31:14.394748] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.905 BaseBdev2 00:17:20.905 00:31:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:20.905 00:31:14 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:17:20.905 00:31:14 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:20.905 00:31:14 -- common/autotest_common.sh@887 -- # local i 00:17:20.905 00:31:14 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:20.905 00:31:14 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:20.905 00:31:14 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:20.905 00:31:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:21.163 [ 00:17:21.163 { 00:17:21.163 "name": "BaseBdev2", 00:17:21.163 "aliases": [ 00:17:21.163 "f9938f26-9aeb-4e5f-bd5d-047baac5712a" 00:17:21.163 ], 00:17:21.163 "product_name": "Malloc disk", 00:17:21.163 "block_size": 512, 00:17:21.163 "num_blocks": 65536, 00:17:21.163 "uuid": "f9938f26-9aeb-4e5f-bd5d-047baac5712a", 00:17:21.163 
"assigned_rate_limits": { 00:17:21.163 "rw_ios_per_sec": 0, 00:17:21.163 "rw_mbytes_per_sec": 0, 00:17:21.163 "r_mbytes_per_sec": 0, 00:17:21.163 "w_mbytes_per_sec": 0 00:17:21.163 }, 00:17:21.163 "claimed": true, 00:17:21.163 "claim_type": "exclusive_write", 00:17:21.163 "zoned": false, 00:17:21.163 "supported_io_types": { 00:17:21.163 "read": true, 00:17:21.163 "write": true, 00:17:21.163 "unmap": true, 00:17:21.163 "write_zeroes": true, 00:17:21.163 "flush": true, 00:17:21.163 "reset": true, 00:17:21.163 "compare": false, 00:17:21.163 "compare_and_write": false, 00:17:21.163 "abort": true, 00:17:21.163 "nvme_admin": false, 00:17:21.163 "nvme_io": false 00:17:21.163 }, 00:17:21.163 "memory_domains": [ 00:17:21.163 { 00:17:21.163 "dma_device_id": "system", 00:17:21.163 "dma_device_type": 1 00:17:21.163 }, 00:17:21.163 { 00:17:21.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.163 "dma_device_type": 2 00:17:21.163 } 00:17:21.163 ], 00:17:21.163 "driver_specific": {} 00:17:21.163 } 00:17:21.163 ] 00:17:21.163 00:31:14 -- common/autotest_common.sh@893 -- # return 0 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.163 00:31:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.428 00:31:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.428 "name": "Existed_Raid", 00:17:21.428 "uuid": "b8b7eabd-8fe7-4a85-8b15-1c576faa9909", 00:17:21.428 "strip_size_kb": 64, 00:17:21.428 "state": "online", 00:17:21.428 "raid_level": "concat", 00:17:21.428 "superblock": true, 00:17:21.428 "num_base_bdevs": 2, 00:17:21.428 "num_base_bdevs_discovered": 2, 00:17:21.428 "num_base_bdevs_operational": 2, 00:17:21.428 "base_bdevs_list": [ 00:17:21.428 { 00:17:21.428 "name": "BaseBdev1", 00:17:21.428 "uuid": "d7bf9608-062d-4744-bb92-21aa06ce8aed", 00:17:21.428 "is_configured": true, 00:17:21.428 "data_offset": 2048, 00:17:21.428 "data_size": 63488 00:17:21.428 }, 00:17:21.428 { 00:17:21.428 "name": "BaseBdev2", 00:17:21.428 "uuid": "f9938f26-9aeb-4e5f-bd5d-047baac5712a", 00:17:21.428 "is_configured": true, 00:17:21.428 "data_offset": 2048, 00:17:21.428 "data_size": 63488 00:17:21.428 } 00:17:21.428 ] 00:17:21.428 }' 00:17:21.428 00:31:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.428 00:31:15 -- common/autotest_common.sh@10 -- # set +x 00:17:21.996 00:31:15 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:22.254 [2024-04-24 00:31:16.042468] 
bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:22.254 [2024-04-24 00:31:16.042737] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.254 [2024-04-24 00:31:16.042890] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.512 00:31:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.771 00:31:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:22.771 "name": "Existed_Raid", 00:17:22.771 "uuid": "b8b7eabd-8fe7-4a85-8b15-1c576faa9909", 00:17:22.771 "strip_size_kb": 64, 00:17:22.771 "state": "offline", 00:17:22.771 "raid_level": "concat", 00:17:22.771 "superblock": true, 00:17:22.771 "num_base_bdevs": 2, 00:17:22.771 "num_base_bdevs_discovered": 1, 00:17:22.771 "num_base_bdevs_operational": 1, 00:17:22.771 "base_bdevs_list": [ 00:17:22.771 { 00:17:22.771 "name": null, 00:17:22.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.771 "is_configured": false, 00:17:22.771 "data_offset": 2048, 00:17:22.771 "data_size": 63488 00:17:22.771 }, 00:17:22.771 { 00:17:22.771 "name": "BaseBdev2", 00:17:22.771 "uuid": "f9938f26-9aeb-4e5f-bd5d-047baac5712a", 00:17:22.771 "is_configured": true, 00:17:22.771 "data_offset": 2048, 00:17:22.771 "data_size": 63488 00:17:22.771 } 00:17:22.771 ] 00:17:22.771 }' 00:17:22.771 00:31:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:22.771 00:31:16 -- common/autotest_common.sh@10 -- # set +x 00:17:23.706 00:31:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:23.706 00:31:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:23.706 00:31:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.706 00:31:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:23.706 00:31:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:23.706 00:31:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:23.706 00:31:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:23.965 [2024-04-24 00:31:17.748938] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev2 00:17:23.965 [2024-04-24 00:31:17.749178] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:17:24.224 00:31:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:24.224 00:31:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:24.224 00:31:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.224 00:31:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:24.482 00:31:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:24.482 00:31:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:24.482 00:31:18 -- bdev/bdev_raid.sh@287 -- # killprocess 121606 00:17:24.482 00:31:18 -- common/autotest_common.sh@936 -- # '[' -z 121606 ']' 00:17:24.482 00:31:18 -- common/autotest_common.sh@940 -- # kill -0 121606 00:17:24.482 00:31:18 -- common/autotest_common.sh@941 -- # uname 00:17:24.482 00:31:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.482 00:31:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121606 00:17:24.482 00:31:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:24.482 00:31:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:24.483 00:31:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121606' 00:17:24.483 killing process with pid 121606 00:17:24.483 00:31:18 -- common/autotest_common.sh@955 -- # kill 121606 00:17:24.483 [2024-04-24 00:31:18.196878] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.483 00:31:18 -- common/autotest_common.sh@960 -- # wait 121606 00:17:24.483 [2024-04-24 00:31:18.197139] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:25.884 ************************************ 00:17:25.884 END TEST raid_state_function_test_sb 00:17:25.884 ************************************ 00:17:25.884 00:31:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:25.884 00:17:25.884 real 0m12.458s 00:17:25.884 user 0m21.134s 00:17:25.884 sys 0m1.718s 00:17:25.884 00:31:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:25.884 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:17:25.885 00:31:19 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:17:25.885 00:31:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:25.885 00:31:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:25.885 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:17:26.143 ************************************ 00:17:26.143 START TEST raid_superblock_test 00:17:26.143 ************************************ 00:17:26.143 00:31:19 -- common/autotest_common.sh@1111 -- # raid_superblock_test concat 2 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@344 -- # local 
strip_size 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@357 -- # raid_pid=121958 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@358 -- # waitforlisten 121958 /var/tmp/spdk-raid.sock 00:17:26.143 00:31:19 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:26.143 00:31:19 -- common/autotest_common.sh@817 -- # '[' -z 121958 ']' 00:17:26.143 00:31:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:26.143 00:31:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:26.143 00:31:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:26.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:26.144 00:31:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:26.144 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 [2024-04-24 00:31:19.761182] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:17:26.144 [2024-04-24 00:31:19.761618] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121958 ] 00:17:26.402 [2024-04-24 00:31:19.938330] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.402 [2024-04-24 00:31:20.180455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.660 [2024-04-24 00:31:20.407181] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.227 00:31:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:27.227 00:31:20 -- common/autotest_common.sh@850 -- # return 0 00:17:27.227 00:31:20 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:27.227 00:31:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:27.227 00:31:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:27.227 00:31:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:27.227 00:31:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:27.227 00:31:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:27.227 00:31:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:27.227 00:31:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:27.227 00:31:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:27.227 malloc1 00:17:27.227 00:31:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:27.485 [2024-04-24 00:31:21.189165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:27.485 [2024-04-24 00:31:21.189507] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:27.486 [2024-04-24 00:31:21.189646] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:27.486 [2024-04-24 00:31:21.189776] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.486 [2024-04-24 00:31:21.192862] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.486 [2024-04-24 00:31:21.193064] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:27.486 pt1 00:17:27.486 00:31:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:27.486 00:31:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:27.486 00:31:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:27.486 00:31:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:27.486 00:31:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:27.486 00:31:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:27.486 00:31:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:27.486 00:31:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:27.486 00:31:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:28.053 malloc2 00:17:28.053 00:31:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:28.053 [2024-04-24 00:31:21.790312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:28.053 [2024-04-24 00:31:21.790837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.053 [2024-04-24 00:31:21.790918] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:28.053 [2024-04-24 00:31:21.791081] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.053 [2024-04-24 00:31:21.793440] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.053 [2024-04-24 00:31:21.793615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:28.053 pt2 00:17:28.053 00:31:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:28.053 00:31:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:28.053 00:31:21 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:17:28.313 [2024-04-24 00:31:22.066594] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:28.313 [2024-04-24 00:31:22.068910] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:28.313 [2024-04-24 00:31:22.069271] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:17:28.313 [2024-04-24 00:31:22.069394] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:28.313 [2024-04-24 00:31:22.069587] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:28.313 [2024-04-24 00:31:22.069954] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:17:28.313 [2024-04-24 00:31:22.070052] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:17:28.313 
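(Editor's note: in raid_superblock_test the base devices are malloc bdevs wrapped in passthru bdevs pt1/pt2 before being combined into raid_bdev1. A minimal sketch of that assembly, using only RPCs that appear verbatim in the trace and the same $RPC shorthand as above, is:)
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Back each passthru bdev with a 32 MB, 512-byte-block malloc disk and a fixed UUID
$RPC bdev_malloc_create 32 512 -b malloc1
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_malloc_create 32 512 -b malloc2
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# Concat raid over the passthru bdevs, 64 KB strip size, with a superblock (-s)
$RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
# Confirm it comes up online with both base bdevs discovered
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'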
[2024-04-24 00:31:22.070313] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.313 00:31:22 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:28.313 00:31:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:28.313 00:31:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:28.313 00:31:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:28.313 00:31:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:28.313 00:31:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:28.313 00:31:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.313 00:31:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.313 00:31:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.313 00:31:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.313 00:31:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.313 00:31:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.572 00:31:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.572 "name": "raid_bdev1", 00:17:28.572 "uuid": "0d2bffe5-d41e-40f2-8a1e-ca9365e04a25", 00:17:28.572 "strip_size_kb": 64, 00:17:28.572 "state": "online", 00:17:28.572 "raid_level": "concat", 00:17:28.572 "superblock": true, 00:17:28.572 "num_base_bdevs": 2, 00:17:28.572 "num_base_bdevs_discovered": 2, 00:17:28.572 "num_base_bdevs_operational": 2, 00:17:28.572 "base_bdevs_list": [ 00:17:28.572 { 00:17:28.572 "name": "pt1", 00:17:28.572 "uuid": "63d2d9a0-a067-53a9-a396-fbc92b662c4a", 00:17:28.572 "is_configured": true, 00:17:28.572 "data_offset": 2048, 00:17:28.572 "data_size": 63488 00:17:28.572 }, 00:17:28.572 { 00:17:28.572 "name": "pt2", 00:17:28.572 "uuid": "63ca758c-7109-53e8-8197-14da4a6fd402", 00:17:28.572 "is_configured": true, 00:17:28.572 "data_offset": 2048, 00:17:28.572 "data_size": 63488 00:17:28.572 } 00:17:28.572 ] 00:17:28.572 }' 00:17:28.572 00:31:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.572 00:31:22 -- common/autotest_common.sh@10 -- # set +x 00:17:29.138 00:31:22 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:29.138 00:31:22 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:29.396 [2024-04-24 00:31:22.998983] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.396 00:31:23 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0d2bffe5-d41e-40f2-8a1e-ca9365e04a25 00:17:29.396 00:31:23 -- bdev/bdev_raid.sh@380 -- # '[' -z 0d2bffe5-d41e-40f2-8a1e-ca9365e04a25 ']' 00:17:29.396 00:31:23 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:29.655 [2024-04-24 00:31:23.270777] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.655 [2024-04-24 00:31:23.270990] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.655 [2024-04-24 00:31:23.271193] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.655 [2024-04-24 00:31:23.271339] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.655 [2024-04-24 00:31:23.271427] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name 
raid_bdev1, state offline 00:17:29.655 00:31:23 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.655 00:31:23 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:29.971 00:31:23 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:29.971 00:31:23 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:29.971 00:31:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:29.971 00:31:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:30.272 00:31:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:30.272 00:31:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:30.531 00:31:24 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:30.531 00:31:24 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:30.531 00:31:24 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:30.531 00:31:24 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:30.531 00:31:24 -- common/autotest_common.sh@638 -- # local es=0 00:17:30.531 00:31:24 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:30.531 00:31:24 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.531 00:31:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:30.531 00:31:24 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.790 00:31:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:30.790 00:31:24 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.790 00:31:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:30.790 00:31:24 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.790 00:31:24 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:30.790 00:31:24 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:17:30.790 [2024-04-24 00:31:24.547105] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:30.790 [2024-04-24 00:31:24.549550] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:30.790 [2024-04-24 00:31:24.549764] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:30.790 [2024-04-24 00:31:24.549966] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:30.790 [2024-04-24 00:31:24.550116] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.790 [2024-04-24 00:31:24.550225] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:17:30.790 request: 00:17:30.790 { 00:17:30.790 "name": 
"raid_bdev1", 00:17:30.790 "raid_level": "concat", 00:17:30.790 "base_bdevs": [ 00:17:30.790 "malloc1", 00:17:30.790 "malloc2" 00:17:30.790 ], 00:17:30.790 "superblock": false, 00:17:30.790 "strip_size_kb": 64, 00:17:30.790 "method": "bdev_raid_create", 00:17:30.790 "req_id": 1 00:17:30.790 } 00:17:30.790 Got JSON-RPC error response 00:17:30.790 response: 00:17:30.790 { 00:17:30.790 "code": -17, 00:17:30.790 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:30.790 } 00:17:30.790 00:31:24 -- common/autotest_common.sh@641 -- # es=1 00:17:30.790 00:31:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:30.790 00:31:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:30.790 00:31:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:30.790 00:31:24 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.790 00:31:24 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:31.050 00:31:24 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:31.050 00:31:24 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:31.050 00:31:24 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:31.308 [2024-04-24 00:31:25.087258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:31.308 [2024-04-24 00:31:25.087682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.308 [2024-04-24 00:31:25.087856] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:31.308 [2024-04-24 00:31:25.087974] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.308 [2024-04-24 00:31:25.090687] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.308 [2024-04-24 00:31:25.090891] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:31.308 [2024-04-24 00:31:25.091194] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:31.308 [2024-04-24 00:31:25.091346] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:31.308 pt1 00:17:31.567 00:31:25 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:17:31.567 00:31:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:31.567 00:31:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:31.567 00:31:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:31.567 00:31:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:31.567 00:31:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:31.567 00:31:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.567 00:31:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:31.567 00:31:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.567 00:31:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.567 00:31:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.567 00:31:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.826 00:31:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:31.826 "name": "raid_bdev1", 00:17:31.826 "uuid": "0d2bffe5-d41e-40f2-8a1e-ca9365e04a25", 00:17:31.826 
"strip_size_kb": 64, 00:17:31.826 "state": "configuring", 00:17:31.826 "raid_level": "concat", 00:17:31.826 "superblock": true, 00:17:31.826 "num_base_bdevs": 2, 00:17:31.826 "num_base_bdevs_discovered": 1, 00:17:31.826 "num_base_bdevs_operational": 2, 00:17:31.826 "base_bdevs_list": [ 00:17:31.826 { 00:17:31.826 "name": "pt1", 00:17:31.826 "uuid": "63d2d9a0-a067-53a9-a396-fbc92b662c4a", 00:17:31.826 "is_configured": true, 00:17:31.826 "data_offset": 2048, 00:17:31.826 "data_size": 63488 00:17:31.826 }, 00:17:31.826 { 00:17:31.826 "name": null, 00:17:31.826 "uuid": "63ca758c-7109-53e8-8197-14da4a6fd402", 00:17:31.826 "is_configured": false, 00:17:31.826 "data_offset": 2048, 00:17:31.826 "data_size": 63488 00:17:31.826 } 00:17:31.826 ] 00:17:31.826 }' 00:17:31.826 00:31:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:31.826 00:31:25 -- common/autotest_common.sh@10 -- # set +x 00:17:32.393 00:31:26 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:17:32.393 00:31:26 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:32.393 00:31:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:32.393 00:31:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:32.652 [2024-04-24 00:31:26.335565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:32.652 [2024-04-24 00:31:26.335904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.652 [2024-04-24 00:31:26.335981] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:17:32.652 [2024-04-24 00:31:26.336125] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.652 [2024-04-24 00:31:26.336653] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.652 [2024-04-24 00:31:26.336820] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:32.652 [2024-04-24 00:31:26.337023] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:32.652 [2024-04-24 00:31:26.337128] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.652 [2024-04-24 00:31:26.337279] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:17:32.652 [2024-04-24 00:31:26.337466] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:32.652 [2024-04-24 00:31:26.337626] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:32.652 [2024-04-24 00:31:26.338074] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:17:32.652 [2024-04-24 00:31:26.338188] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:17:32.652 [2024-04-24 00:31:26.338408] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.652 pt2 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=concat 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.652 00:31:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.910 00:31:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.910 "name": "raid_bdev1", 00:17:32.910 "uuid": "0d2bffe5-d41e-40f2-8a1e-ca9365e04a25", 00:17:32.910 "strip_size_kb": 64, 00:17:32.910 "state": "online", 00:17:32.910 "raid_level": "concat", 00:17:32.910 "superblock": true, 00:17:32.910 "num_base_bdevs": 2, 00:17:32.910 "num_base_bdevs_discovered": 2, 00:17:32.910 "num_base_bdevs_operational": 2, 00:17:32.910 "base_bdevs_list": [ 00:17:32.910 { 00:17:32.910 "name": "pt1", 00:17:32.910 "uuid": "63d2d9a0-a067-53a9-a396-fbc92b662c4a", 00:17:32.910 "is_configured": true, 00:17:32.910 "data_offset": 2048, 00:17:32.910 "data_size": 63488 00:17:32.910 }, 00:17:32.910 { 00:17:32.910 "name": "pt2", 00:17:32.910 "uuid": "63ca758c-7109-53e8-8197-14da4a6fd402", 00:17:32.910 "is_configured": true, 00:17:32.910 "data_offset": 2048, 00:17:32.910 "data_size": 63488 00:17:32.910 } 00:17:32.910 ] 00:17:32.910 }' 00:17:32.910 00:31:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.910 00:31:26 -- common/autotest_common.sh@10 -- # set +x 00:17:33.843 00:31:27 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:33.843 00:31:27 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:33.843 [2024-04-24 00:31:27.572061] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.843 00:31:27 -- bdev/bdev_raid.sh@430 -- # '[' 0d2bffe5-d41e-40f2-8a1e-ca9365e04a25 '!=' 0d2bffe5-d41e-40f2-8a1e-ca9365e04a25 ']' 00:17:33.843 00:31:27 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:33.843 00:31:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:33.843 00:31:27 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:33.843 00:31:27 -- bdev/bdev_raid.sh@511 -- # killprocess 121958 00:17:33.843 00:31:27 -- common/autotest_common.sh@936 -- # '[' -z 121958 ']' 00:17:33.843 00:31:27 -- common/autotest_common.sh@940 -- # kill -0 121958 00:17:33.843 00:31:27 -- common/autotest_common.sh@941 -- # uname 00:17:33.843 00:31:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:33.843 00:31:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121958 00:17:33.843 killing process with pid 121958 00:17:33.843 00:31:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:33.843 00:31:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:33.843 00:31:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121958' 00:17:33.843 00:31:27 -- common/autotest_common.sh@955 -- # kill 121958 00:17:33.843 [2024-04-24 00:31:27.614568] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.843 00:31:27 -- common/autotest_common.sh@960 -- # wait 121958 00:17:33.843 [2024-04-24 00:31:27.614654] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.843 [2024-04-24 00:31:27.614704] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.843 [2024-04-24 00:31:27.614713] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:17:34.102 [2024-04-24 00:31:27.836976] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.533 ************************************ 00:17:35.533 END TEST raid_superblock_test 00:17:35.533 ************************************ 00:17:35.533 00:31:29 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:35.533 00:17:35.533 real 0m9.565s 00:17:35.533 user 0m15.846s 00:17:35.533 sys 0m1.298s 00:17:35.533 00:31:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:35.533 00:31:29 -- common/autotest_common.sh@10 -- # set +x 00:17:35.533 00:31:29 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:35.533 00:31:29 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:17:35.533 00:31:29 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:35.533 00:31:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.533 00:31:29 -- common/autotest_common.sh@10 -- # set +x 00:17:35.792 ************************************ 00:17:35.792 START TEST raid_state_function_test 00:17:35.792 ************************************ 00:17:35.792 00:31:29 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 2 false 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=122225 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122225' 00:17:35.792 Process raid pid: 122225 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122225 /var/tmp/spdk-raid.sock 00:17:35.792 00:31:29 -- common/autotest_common.sh@817 -- # '[' -z 122225 
']' 00:17:35.792 00:31:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:35.792 00:31:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:35.792 00:31:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:35.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:35.792 00:31:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.792 00:31:29 -- common/autotest_common.sh@10 -- # set +x 00:17:35.792 00:31:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:35.792 [2024-04-24 00:31:29.435598] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:17:35.792 [2024-04-24 00:31:29.435782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.051 [2024-04-24 00:31:29.612635] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.051 [2024-04-24 00:31:29.822870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.310 [2024-04-24 00:31:30.042062] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.876 00:31:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:36.876 00:31:30 -- common/autotest_common.sh@850 -- # return 0 00:17:36.876 00:31:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:37.134 [2024-04-24 00:31:30.669502] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.134 [2024-04-24 00:31:30.669597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:37.134 [2024-04-24 00:31:30.669614] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.134 [2024-04-24 00:31:30.669643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.134 00:31:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:37.134 00:31:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:37.134 00:31:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:37.134 00:31:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:37.134 00:31:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:37.134 00:31:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:37.134 00:31:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.134 00:31:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.134 00:31:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.134 00:31:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.134 00:31:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.134 00:31:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.393 00:31:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.393 "name": "Existed_Raid", 00:17:37.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.393 
"strip_size_kb": 0, 00:17:37.393 "state": "configuring", 00:17:37.393 "raid_level": "raid1", 00:17:37.393 "superblock": false, 00:17:37.393 "num_base_bdevs": 2, 00:17:37.393 "num_base_bdevs_discovered": 0, 00:17:37.393 "num_base_bdevs_operational": 2, 00:17:37.393 "base_bdevs_list": [ 00:17:37.393 { 00:17:37.393 "name": "BaseBdev1", 00:17:37.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.393 "is_configured": false, 00:17:37.393 "data_offset": 0, 00:17:37.393 "data_size": 0 00:17:37.393 }, 00:17:37.393 { 00:17:37.393 "name": "BaseBdev2", 00:17:37.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.393 "is_configured": false, 00:17:37.393 "data_offset": 0, 00:17:37.393 "data_size": 0 00:17:37.393 } 00:17:37.393 ] 00:17:37.393 }' 00:17:37.393 00:31:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.393 00:31:30 -- common/autotest_common.sh@10 -- # set +x 00:17:37.960 00:31:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:37.960 [2024-04-24 00:31:31.721566] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:37.960 [2024-04-24 00:31:31.721627] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:17:37.960 00:31:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:38.220 [2024-04-24 00:31:31.993622] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:38.220 [2024-04-24 00:31:31.993726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:38.220 [2024-04-24 00:31:31.993736] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.220 [2024-04-24 00:31:31.993765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.479 00:31:32 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:38.479 [2024-04-24 00:31:32.222076] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.479 BaseBdev1 00:17:38.479 00:31:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:38.479 00:31:32 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:38.479 00:31:32 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:38.479 00:31:32 -- common/autotest_common.sh@887 -- # local i 00:17:38.479 00:31:32 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:38.479 00:31:32 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:38.479 00:31:32 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:38.738 00:31:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:38.997 [ 00:17:38.997 { 00:17:38.997 "name": "BaseBdev1", 00:17:38.997 "aliases": [ 00:17:38.997 "08d9060d-f490-45f9-9751-cd063c44441c" 00:17:38.997 ], 00:17:38.997 "product_name": "Malloc disk", 00:17:38.997 "block_size": 512, 00:17:38.997 "num_blocks": 65536, 00:17:38.997 "uuid": "08d9060d-f490-45f9-9751-cd063c44441c", 00:17:38.997 "assigned_rate_limits": { 00:17:38.997 "rw_ios_per_sec": 0, 00:17:38.997 
"rw_mbytes_per_sec": 0, 00:17:38.997 "r_mbytes_per_sec": 0, 00:17:38.997 "w_mbytes_per_sec": 0 00:17:38.997 }, 00:17:38.997 "claimed": true, 00:17:38.997 "claim_type": "exclusive_write", 00:17:38.997 "zoned": false, 00:17:38.997 "supported_io_types": { 00:17:38.997 "read": true, 00:17:38.997 "write": true, 00:17:38.997 "unmap": true, 00:17:38.997 "write_zeroes": true, 00:17:38.997 "flush": true, 00:17:38.997 "reset": true, 00:17:38.997 "compare": false, 00:17:38.997 "compare_and_write": false, 00:17:38.997 "abort": true, 00:17:38.997 "nvme_admin": false, 00:17:38.997 "nvme_io": false 00:17:38.997 }, 00:17:38.997 "memory_domains": [ 00:17:38.997 { 00:17:38.997 "dma_device_id": "system", 00:17:38.997 "dma_device_type": 1 00:17:38.997 }, 00:17:38.997 { 00:17:38.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.997 "dma_device_type": 2 00:17:38.997 } 00:17:38.997 ], 00:17:38.997 "driver_specific": {} 00:17:38.997 } 00:17:38.997 ] 00:17:38.997 00:31:32 -- common/autotest_common.sh@893 -- # return 0 00:17:38.997 00:31:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:38.997 00:31:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:38.997 00:31:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:38.997 00:31:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:38.997 00:31:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:38.997 00:31:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:38.997 00:31:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.997 00:31:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.997 00:31:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.997 00:31:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.997 00:31:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.997 00:31:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.255 00:31:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.255 "name": "Existed_Raid", 00:17:39.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.255 "strip_size_kb": 0, 00:17:39.255 "state": "configuring", 00:17:39.255 "raid_level": "raid1", 00:17:39.255 "superblock": false, 00:17:39.255 "num_base_bdevs": 2, 00:17:39.255 "num_base_bdevs_discovered": 1, 00:17:39.255 "num_base_bdevs_operational": 2, 00:17:39.255 "base_bdevs_list": [ 00:17:39.255 { 00:17:39.255 "name": "BaseBdev1", 00:17:39.255 "uuid": "08d9060d-f490-45f9-9751-cd063c44441c", 00:17:39.255 "is_configured": true, 00:17:39.255 "data_offset": 0, 00:17:39.255 "data_size": 65536 00:17:39.255 }, 00:17:39.255 { 00:17:39.255 "name": "BaseBdev2", 00:17:39.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.255 "is_configured": false, 00:17:39.255 "data_offset": 0, 00:17:39.255 "data_size": 0 00:17:39.255 } 00:17:39.256 ] 00:17:39.256 }' 00:17:39.256 00:31:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.256 00:31:33 -- common/autotest_common.sh@10 -- # set +x 00:17:40.207 00:31:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:40.207 [2024-04-24 00:31:33.918528] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.207 [2024-04-24 00:31:33.918588] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, 
state configuring 00:17:40.207 00:31:33 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:40.207 00:31:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:40.465 [2024-04-24 00:31:34.222592] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.465 [2024-04-24 00:31:34.224803] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.465 [2024-04-24 00:31:34.224868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.465 00:31:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.723 00:31:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.723 "name": "Existed_Raid", 00:17:40.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.723 "strip_size_kb": 0, 00:17:40.723 "state": "configuring", 00:17:40.723 "raid_level": "raid1", 00:17:40.723 "superblock": false, 00:17:40.723 "num_base_bdevs": 2, 00:17:40.723 "num_base_bdevs_discovered": 1, 00:17:40.723 "num_base_bdevs_operational": 2, 00:17:40.723 "base_bdevs_list": [ 00:17:40.723 { 00:17:40.723 "name": "BaseBdev1", 00:17:40.723 "uuid": "08d9060d-f490-45f9-9751-cd063c44441c", 00:17:40.723 "is_configured": true, 00:17:40.723 "data_offset": 0, 00:17:40.723 "data_size": 65536 00:17:40.723 }, 00:17:40.723 { 00:17:40.723 "name": "BaseBdev2", 00:17:40.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.723 "is_configured": false, 00:17:40.723 "data_offset": 0, 00:17:40.723 "data_size": 0 00:17:40.723 } 00:17:40.723 ] 00:17:40.723 }' 00:17:40.723 00:31:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.723 00:31:34 -- common/autotest_common.sh@10 -- # set +x 00:17:41.656 00:31:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:41.914 [2024-04-24 00:31:35.447057] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:41.914 [2024-04-24 00:31:35.447116] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:17:41.914 [2024-04-24 00:31:35.447125] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:41.914 [2024-04-24 00:31:35.447241] bdev_raid.c: 232:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:41.914 [2024-04-24 00:31:35.447560] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:17:41.914 [2024-04-24 00:31:35.447580] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:17:41.914 [2024-04-24 00:31:35.447824] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.914 BaseBdev2 00:17:41.914 00:31:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:41.914 00:31:35 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:17:41.914 00:31:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:41.914 00:31:35 -- common/autotest_common.sh@887 -- # local i 00:17:41.914 00:31:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:41.914 00:31:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:41.914 00:31:35 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.172 00:31:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:42.430 [ 00:17:42.430 { 00:17:42.430 "name": "BaseBdev2", 00:17:42.430 "aliases": [ 00:17:42.430 "e7a29381-1fbf-4686-a5c9-53a78fc9714e" 00:17:42.430 ], 00:17:42.430 "product_name": "Malloc disk", 00:17:42.430 "block_size": 512, 00:17:42.430 "num_blocks": 65536, 00:17:42.430 "uuid": "e7a29381-1fbf-4686-a5c9-53a78fc9714e", 00:17:42.430 "assigned_rate_limits": { 00:17:42.430 "rw_ios_per_sec": 0, 00:17:42.430 "rw_mbytes_per_sec": 0, 00:17:42.430 "r_mbytes_per_sec": 0, 00:17:42.430 "w_mbytes_per_sec": 0 00:17:42.430 }, 00:17:42.430 "claimed": true, 00:17:42.430 "claim_type": "exclusive_write", 00:17:42.430 "zoned": false, 00:17:42.430 "supported_io_types": { 00:17:42.430 "read": true, 00:17:42.430 "write": true, 00:17:42.430 "unmap": true, 00:17:42.430 "write_zeroes": true, 00:17:42.430 "flush": true, 00:17:42.430 "reset": true, 00:17:42.430 "compare": false, 00:17:42.430 "compare_and_write": false, 00:17:42.430 "abort": true, 00:17:42.430 "nvme_admin": false, 00:17:42.430 "nvme_io": false 00:17:42.430 }, 00:17:42.430 "memory_domains": [ 00:17:42.430 { 00:17:42.430 "dma_device_id": "system", 00:17:42.430 "dma_device_type": 1 00:17:42.430 }, 00:17:42.430 { 00:17:42.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.430 "dma_device_type": 2 00:17:42.430 } 00:17:42.430 ], 00:17:42.430 "driver_specific": {} 00:17:42.430 } 00:17:42.430 ] 00:17:42.430 00:31:36 -- common/autotest_common.sh@893 -- # return 0 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.430 00:31:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.689 00:31:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.689 "name": "Existed_Raid", 00:17:42.689 "uuid": "ab96ed6b-c686-49f6-bed5-9805e41b2fb2", 00:17:42.689 "strip_size_kb": 0, 00:17:42.689 "state": "online", 00:17:42.689 "raid_level": "raid1", 00:17:42.689 "superblock": false, 00:17:42.689 "num_base_bdevs": 2, 00:17:42.689 "num_base_bdevs_discovered": 2, 00:17:42.689 "num_base_bdevs_operational": 2, 00:17:42.689 "base_bdevs_list": [ 00:17:42.689 { 00:17:42.689 "name": "BaseBdev1", 00:17:42.689 "uuid": "08d9060d-f490-45f9-9751-cd063c44441c", 00:17:42.689 "is_configured": true, 00:17:42.689 "data_offset": 0, 00:17:42.689 "data_size": 65536 00:17:42.689 }, 00:17:42.689 { 00:17:42.689 "name": "BaseBdev2", 00:17:42.689 "uuid": "e7a29381-1fbf-4686-a5c9-53a78fc9714e", 00:17:42.689 "is_configured": true, 00:17:42.689 "data_offset": 0, 00:17:42.689 "data_size": 65536 00:17:42.689 } 00:17:42.689 ] 00:17:42.689 }' 00:17:42.689 00:31:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.689 00:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:43.254 00:31:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:43.511 [2024-04-24 00:31:37.263630] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.841 00:31:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.101 00:31:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.101 "name": "Existed_Raid", 00:17:44.101 "uuid": "ab96ed6b-c686-49f6-bed5-9805e41b2fb2", 00:17:44.101 "strip_size_kb": 0, 00:17:44.101 "state": "online", 00:17:44.101 "raid_level": "raid1", 00:17:44.101 "superblock": false, 00:17:44.101 "num_base_bdevs": 2, 00:17:44.101 "num_base_bdevs_discovered": 1, 00:17:44.101 "num_base_bdevs_operational": 1, 00:17:44.101 "base_bdevs_list": [ 00:17:44.101 { 00:17:44.101 "name": null, 00:17:44.101 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:44.101 "is_configured": false, 00:17:44.101 "data_offset": 0, 00:17:44.101 "data_size": 65536 00:17:44.101 }, 00:17:44.101 { 00:17:44.101 "name": "BaseBdev2", 00:17:44.101 "uuid": "e7a29381-1fbf-4686-a5c9-53a78fc9714e", 00:17:44.101 "is_configured": true, 00:17:44.101 "data_offset": 0, 00:17:44.101 "data_size": 65536 00:17:44.101 } 00:17:44.101 ] 00:17:44.101 }' 00:17:44.101 00:31:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.101 00:31:37 -- common/autotest_common.sh@10 -- # set +x 00:17:44.667 00:31:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:44.667 00:31:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:44.667 00:31:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.667 00:31:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:44.925 00:31:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:44.925 00:31:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:44.925 00:31:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:45.182 [2024-04-24 00:31:38.849200] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:45.182 [2024-04-24 00:31:38.849311] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.182 [2024-04-24 00:31:38.957554] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.183 [2024-04-24 00:31:38.957697] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.183 [2024-04-24 00:31:38.957710] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:17:45.440 00:31:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:45.440 00:31:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:45.440 00:31:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.440 00:31:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:45.440 00:31:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:45.440 00:31:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:45.440 00:31:39 -- bdev/bdev_raid.sh@287 -- # killprocess 122225 00:17:45.440 00:31:39 -- common/autotest_common.sh@936 -- # '[' -z 122225 ']' 00:17:45.440 00:31:39 -- common/autotest_common.sh@940 -- # kill -0 122225 00:17:45.440 00:31:39 -- common/autotest_common.sh@941 -- # uname 00:17:45.440 00:31:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:45.440 00:31:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122225 00:17:45.440 00:31:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:45.440 00:31:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:45.440 00:31:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122225' 00:17:45.440 killing process with pid 122225 00:17:45.440 00:31:39 -- common/autotest_common.sh@955 -- # kill 122225 00:17:45.440 [2024-04-24 00:31:39.222350] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.440 00:31:39 -- common/autotest_common.sh@960 -- # wait 122225 00:17:45.440 [2024-04-24 00:31:39.222490] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:47.341 
************************************ 00:17:47.341 END TEST raid_state_function_test 00:17:47.341 ************************************ 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:47.341 00:17:47.341 real 0m11.263s 00:17:47.341 user 0m18.944s 00:17:47.341 sys 0m1.644s 00:17:47.341 00:31:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:47.341 00:31:40 -- common/autotest_common.sh@10 -- # set +x 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:17:47.341 00:31:40 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:47.341 00:31:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:47.341 00:31:40 -- common/autotest_common.sh@10 -- # set +x 00:17:47.341 ************************************ 00:17:47.341 START TEST raid_state_function_test_sb 00:17:47.341 ************************************ 00:17:47.341 00:31:40 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 2 true 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@226 -- # raid_pid=122556 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122556' 00:17:47.341 Process raid pid: 122556 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122556 /var/tmp/spdk-raid.sock 00:17:47.341 00:31:40 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:47.341 00:31:40 -- common/autotest_common.sh@817 -- # '[' -z 122556 ']' 00:17:47.341 00:31:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:47.341 00:31:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:47.341 00:31:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:17:47.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:47.341 00:31:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:47.341 00:31:40 -- common/autotest_common.sh@10 -- # set +x 00:17:47.341 [2024-04-24 00:31:40.807765] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:17:47.341 [2024-04-24 00:31:40.807960] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.341 [2024-04-24 00:31:40.996324] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.599 [2024-04-24 00:31:41.271852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.857 [2024-04-24 00:31:41.476916] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.117 00:31:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:48.117 00:31:41 -- common/autotest_common.sh@850 -- # return 0 00:17:48.117 00:31:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:48.375 [2024-04-24 00:31:42.014425] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.375 [2024-04-24 00:31:42.014713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.375 [2024-04-24 00:31:42.014840] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:48.375 [2024-04-24 00:31:42.014899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:48.375 00:31:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:48.375 00:31:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.375 00:31:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:48.375 00:31:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:48.375 00:31:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:48.375 00:31:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:48.375 00:31:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.375 00:31:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.375 00:31:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.375 00:31:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.375 00:31:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.375 00:31:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.634 00:31:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.634 "name": "Existed_Raid", 00:17:48.634 "uuid": "cbe853cf-f1e4-4bba-b7b2-5effc8b397ab", 00:17:48.634 "strip_size_kb": 0, 00:17:48.634 "state": "configuring", 00:17:48.634 "raid_level": "raid1", 00:17:48.634 "superblock": true, 00:17:48.634 "num_base_bdevs": 2, 00:17:48.634 "num_base_bdevs_discovered": 0, 00:17:48.634 "num_base_bdevs_operational": 2, 00:17:48.634 "base_bdevs_list": [ 00:17:48.634 { 00:17:48.634 "name": "BaseBdev1", 00:17:48.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.634 "is_configured": false, 00:17:48.634 "data_offset": 0, 00:17:48.634 "data_size": 0 
00:17:48.634 }, 00:17:48.634 { 00:17:48.634 "name": "BaseBdev2", 00:17:48.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.634 "is_configured": false, 00:17:48.634 "data_offset": 0, 00:17:48.634 "data_size": 0 00:17:48.634 } 00:17:48.634 ] 00:17:48.634 }' 00:17:48.634 00:31:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.634 00:31:42 -- common/autotest_common.sh@10 -- # set +x 00:17:49.203 00:31:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:49.461 [2024-04-24 00:31:43.090463] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.461 [2024-04-24 00:31:43.090522] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:17:49.461 00:31:43 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:49.719 [2024-04-24 00:31:43.378753] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:49.719 [2024-04-24 00:31:43.378864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:49.719 [2024-04-24 00:31:43.378877] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.719 [2024-04-24 00:31:43.378909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.719 00:31:43 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:49.977 [2024-04-24 00:31:43.629264] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.977 BaseBdev1 00:17:49.977 00:31:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:49.977 00:31:43 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:49.977 00:31:43 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:49.977 00:31:43 -- common/autotest_common.sh@887 -- # local i 00:17:49.977 00:31:43 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:49.977 00:31:43 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:49.977 00:31:43 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.235 00:31:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:50.493 [ 00:17:50.493 { 00:17:50.493 "name": "BaseBdev1", 00:17:50.493 "aliases": [ 00:17:50.493 "fb109a1d-8856-4f91-a73b-815ce1b5a82f" 00:17:50.493 ], 00:17:50.493 "product_name": "Malloc disk", 00:17:50.493 "block_size": 512, 00:17:50.493 "num_blocks": 65536, 00:17:50.493 "uuid": "fb109a1d-8856-4f91-a73b-815ce1b5a82f", 00:17:50.493 "assigned_rate_limits": { 00:17:50.493 "rw_ios_per_sec": 0, 00:17:50.493 "rw_mbytes_per_sec": 0, 00:17:50.493 "r_mbytes_per_sec": 0, 00:17:50.493 "w_mbytes_per_sec": 0 00:17:50.493 }, 00:17:50.493 "claimed": true, 00:17:50.493 "claim_type": "exclusive_write", 00:17:50.493 "zoned": false, 00:17:50.493 "supported_io_types": { 00:17:50.493 "read": true, 00:17:50.493 "write": true, 00:17:50.493 "unmap": true, 00:17:50.493 "write_zeroes": true, 00:17:50.493 "flush": true, 00:17:50.493 "reset": true, 00:17:50.493 "compare": false, 00:17:50.493 "compare_and_write": false, 
00:17:50.493 "abort": true, 00:17:50.493 "nvme_admin": false, 00:17:50.493 "nvme_io": false 00:17:50.493 }, 00:17:50.493 "memory_domains": [ 00:17:50.493 { 00:17:50.493 "dma_device_id": "system", 00:17:50.493 "dma_device_type": 1 00:17:50.493 }, 00:17:50.493 { 00:17:50.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.493 "dma_device_type": 2 00:17:50.493 } 00:17:50.493 ], 00:17:50.493 "driver_specific": {} 00:17:50.493 } 00:17:50.493 ] 00:17:50.493 00:31:44 -- common/autotest_common.sh@893 -- # return 0 00:17:50.493 00:31:44 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:50.493 00:31:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:50.493 00:31:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:50.493 00:31:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:50.493 00:31:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:50.493 00:31:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:50.493 00:31:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:50.493 00:31:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:50.493 00:31:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:50.493 00:31:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:50.493 00:31:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.493 00:31:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.751 00:31:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:50.751 "name": "Existed_Raid", 00:17:50.751 "uuid": "02ffb77d-37c7-400f-aa57-c3604b35f3dd", 00:17:50.751 "strip_size_kb": 0, 00:17:50.751 "state": "configuring", 00:17:50.751 "raid_level": "raid1", 00:17:50.751 "superblock": true, 00:17:50.751 "num_base_bdevs": 2, 00:17:50.751 "num_base_bdevs_discovered": 1, 00:17:50.751 "num_base_bdevs_operational": 2, 00:17:50.751 "base_bdevs_list": [ 00:17:50.751 { 00:17:50.751 "name": "BaseBdev1", 00:17:50.751 "uuid": "fb109a1d-8856-4f91-a73b-815ce1b5a82f", 00:17:50.751 "is_configured": true, 00:17:50.751 "data_offset": 2048, 00:17:50.751 "data_size": 63488 00:17:50.751 }, 00:17:50.751 { 00:17:50.751 "name": "BaseBdev2", 00:17:50.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.751 "is_configured": false, 00:17:50.751 "data_offset": 0, 00:17:50.751 "data_size": 0 00:17:50.751 } 00:17:50.751 ] 00:17:50.751 }' 00:17:50.751 00:31:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:50.751 00:31:44 -- common/autotest_common.sh@10 -- # set +x 00:17:51.315 00:31:45 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:51.573 [2024-04-24 00:31:45.265766] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:51.574 [2024-04-24 00:31:45.266020] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:17:51.574 00:31:45 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:51.574 00:31:45 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:52.139 00:31:45 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:52.139 BaseBdev1 00:17:52.460 00:31:45 -- bdev/bdev_raid.sh@248 -- # waitforbdev 
BaseBdev1 00:17:52.460 00:31:45 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:52.460 00:31:45 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:52.460 00:31:45 -- common/autotest_common.sh@887 -- # local i 00:17:52.460 00:31:45 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:52.460 00:31:45 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:52.460 00:31:45 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.460 00:31:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:52.718 [ 00:17:52.718 { 00:17:52.718 "name": "BaseBdev1", 00:17:52.718 "aliases": [ 00:17:52.718 "8a462afc-a0be-4882-a7cd-db636fc0935d" 00:17:52.718 ], 00:17:52.718 "product_name": "Malloc disk", 00:17:52.718 "block_size": 512, 00:17:52.718 "num_blocks": 65536, 00:17:52.718 "uuid": "8a462afc-a0be-4882-a7cd-db636fc0935d", 00:17:52.718 "assigned_rate_limits": { 00:17:52.718 "rw_ios_per_sec": 0, 00:17:52.718 "rw_mbytes_per_sec": 0, 00:17:52.718 "r_mbytes_per_sec": 0, 00:17:52.718 "w_mbytes_per_sec": 0 00:17:52.718 }, 00:17:52.718 "claimed": false, 00:17:52.718 "zoned": false, 00:17:52.718 "supported_io_types": { 00:17:52.718 "read": true, 00:17:52.718 "write": true, 00:17:52.718 "unmap": true, 00:17:52.718 "write_zeroes": true, 00:17:52.718 "flush": true, 00:17:52.718 "reset": true, 00:17:52.718 "compare": false, 00:17:52.718 "compare_and_write": false, 00:17:52.718 "abort": true, 00:17:52.718 "nvme_admin": false, 00:17:52.718 "nvme_io": false 00:17:52.718 }, 00:17:52.718 "memory_domains": [ 00:17:52.718 { 00:17:52.718 "dma_device_id": "system", 00:17:52.718 "dma_device_type": 1 00:17:52.718 }, 00:17:52.718 { 00:17:52.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.718 "dma_device_type": 2 00:17:52.718 } 00:17:52.718 ], 00:17:52.718 "driver_specific": {} 00:17:52.718 } 00:17:52.718 ] 00:17:52.718 00:31:46 -- common/autotest_common.sh@893 -- # return 0 00:17:52.718 00:31:46 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:52.976 [2024-04-24 00:31:46.561641] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.976 [2024-04-24 00:31:46.564687] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.976 [2024-04-24 00:31:46.564891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.976 00:31:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.234 00:31:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.234 "name": "Existed_Raid", 00:17:53.234 "uuid": "ececa7c9-218c-4dea-b665-9aedbfdb6beb", 00:17:53.234 "strip_size_kb": 0, 00:17:53.234 "state": "configuring", 00:17:53.234 "raid_level": "raid1", 00:17:53.234 "superblock": true, 00:17:53.234 "num_base_bdevs": 2, 00:17:53.234 "num_base_bdevs_discovered": 1, 00:17:53.234 "num_base_bdevs_operational": 2, 00:17:53.234 "base_bdevs_list": [ 00:17:53.234 { 00:17:53.234 "name": "BaseBdev1", 00:17:53.234 "uuid": "8a462afc-a0be-4882-a7cd-db636fc0935d", 00:17:53.234 "is_configured": true, 00:17:53.234 "data_offset": 2048, 00:17:53.234 "data_size": 63488 00:17:53.234 }, 00:17:53.234 { 00:17:53.234 "name": "BaseBdev2", 00:17:53.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.234 "is_configured": false, 00:17:53.234 "data_offset": 0, 00:17:53.234 "data_size": 0 00:17:53.234 } 00:17:53.234 ] 00:17:53.234 }' 00:17:53.234 00:31:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.234 00:31:46 -- common/autotest_common.sh@10 -- # set +x 00:17:53.799 00:31:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:54.058 [2024-04-24 00:31:47.693346] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.058 [2024-04-24 00:31:47.693789] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:17:54.058 [2024-04-24 00:31:47.693916] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:54.058 [2024-04-24 00:31:47.694113] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:54.058 [2024-04-24 00:31:47.694523] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:17:54.058 [2024-04-24 00:31:47.694632] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:17:54.058 [2024-04-24 00:31:47.694877] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.058 BaseBdev2 00:17:54.058 00:31:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:54.058 00:31:47 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:17:54.058 00:31:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:54.058 00:31:47 -- common/autotest_common.sh@887 -- # local i 00:17:54.058 00:31:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:54.058 00:31:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:54.058 00:31:47 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.316 00:31:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:54.580 [ 00:17:54.580 { 00:17:54.580 "name": "BaseBdev2", 00:17:54.580 "aliases": [ 00:17:54.580 "4f369a87-c8c6-416e-814a-01b746a26d2d" 00:17:54.580 ], 00:17:54.580 "product_name": "Malloc disk", 00:17:54.580 "block_size": 512, 00:17:54.580 "num_blocks": 65536, 
00:17:54.580 "uuid": "4f369a87-c8c6-416e-814a-01b746a26d2d", 00:17:54.580 "assigned_rate_limits": { 00:17:54.580 "rw_ios_per_sec": 0, 00:17:54.580 "rw_mbytes_per_sec": 0, 00:17:54.580 "r_mbytes_per_sec": 0, 00:17:54.580 "w_mbytes_per_sec": 0 00:17:54.580 }, 00:17:54.580 "claimed": true, 00:17:54.580 "claim_type": "exclusive_write", 00:17:54.580 "zoned": false, 00:17:54.580 "supported_io_types": { 00:17:54.580 "read": true, 00:17:54.580 "write": true, 00:17:54.580 "unmap": true, 00:17:54.580 "write_zeroes": true, 00:17:54.580 "flush": true, 00:17:54.580 "reset": true, 00:17:54.580 "compare": false, 00:17:54.580 "compare_and_write": false, 00:17:54.580 "abort": true, 00:17:54.580 "nvme_admin": false, 00:17:54.580 "nvme_io": false 00:17:54.580 }, 00:17:54.580 "memory_domains": [ 00:17:54.580 { 00:17:54.580 "dma_device_id": "system", 00:17:54.580 "dma_device_type": 1 00:17:54.580 }, 00:17:54.580 { 00:17:54.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.580 "dma_device_type": 2 00:17:54.580 } 00:17:54.580 ], 00:17:54.580 "driver_specific": {} 00:17:54.580 } 00:17:54.580 ] 00:17:54.580 00:31:48 -- common/autotest_common.sh@893 -- # return 0 00:17:54.580 00:31:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:54.580 00:31:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:54.580 00:31:48 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:54.580 00:31:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:54.580 00:31:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:54.581 00:31:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:54.581 00:31:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:54.581 00:31:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:54.581 00:31:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.581 00:31:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.581 00:31:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.581 00:31:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.581 00:31:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.581 00:31:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.839 00:31:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.839 "name": "Existed_Raid", 00:17:54.839 "uuid": "ececa7c9-218c-4dea-b665-9aedbfdb6beb", 00:17:54.839 "strip_size_kb": 0, 00:17:54.839 "state": "online", 00:17:54.839 "raid_level": "raid1", 00:17:54.839 "superblock": true, 00:17:54.839 "num_base_bdevs": 2, 00:17:54.839 "num_base_bdevs_discovered": 2, 00:17:54.839 "num_base_bdevs_operational": 2, 00:17:54.839 "base_bdevs_list": [ 00:17:54.839 { 00:17:54.839 "name": "BaseBdev1", 00:17:54.839 "uuid": "8a462afc-a0be-4882-a7cd-db636fc0935d", 00:17:54.839 "is_configured": true, 00:17:54.839 "data_offset": 2048, 00:17:54.839 "data_size": 63488 00:17:54.839 }, 00:17:54.839 { 00:17:54.839 "name": "BaseBdev2", 00:17:54.839 "uuid": "4f369a87-c8c6-416e-814a-01b746a26d2d", 00:17:54.839 "is_configured": true, 00:17:54.839 "data_offset": 2048, 00:17:54.839 "data_size": 63488 00:17:54.839 } 00:17:54.839 ] 00:17:54.839 }' 00:17:54.839 00:31:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.839 00:31:48 -- common/autotest_common.sh@10 -- # set +x 00:17:55.405 00:31:49 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:17:55.664 [2024-04-24 00:31:49.383566] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.922 00:31:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.181 00:31:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.181 "name": "Existed_Raid", 00:17:56.181 "uuid": "ececa7c9-218c-4dea-b665-9aedbfdb6beb", 00:17:56.181 "strip_size_kb": 0, 00:17:56.181 "state": "online", 00:17:56.181 "raid_level": "raid1", 00:17:56.181 "superblock": true, 00:17:56.181 "num_base_bdevs": 2, 00:17:56.181 "num_base_bdevs_discovered": 1, 00:17:56.181 "num_base_bdevs_operational": 1, 00:17:56.181 "base_bdevs_list": [ 00:17:56.181 { 00:17:56.181 "name": null, 00:17:56.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.181 "is_configured": false, 00:17:56.181 "data_offset": 2048, 00:17:56.181 "data_size": 63488 00:17:56.181 }, 00:17:56.181 { 00:17:56.181 "name": "BaseBdev2", 00:17:56.181 "uuid": "4f369a87-c8c6-416e-814a-01b746a26d2d", 00:17:56.181 "is_configured": true, 00:17:56.181 "data_offset": 2048, 00:17:56.181 "data_size": 63488 00:17:56.181 } 00:17:56.181 ] 00:17:56.181 }' 00:17:56.181 00:31:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:56.181 00:31:49 -- common/autotest_common.sh@10 -- # set +x 00:17:56.746 00:31:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:56.746 00:31:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:56.746 00:31:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.746 00:31:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:57.004 00:31:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:57.004 00:31:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:57.004 00:31:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:57.262 [2024-04-24 00:31:50.889821] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:57.262 [2024-04-24 00:31:50.890143] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.262 [2024-04-24 
00:31:50.998044] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.262 [2024-04-24 00:31:50.998386] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.262 [2024-04-24 00:31:50.999132] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:17:57.262 00:31:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:57.262 00:31:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:57.262 00:31:51 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.262 00:31:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:57.520 00:31:51 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:57.520 00:31:51 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:57.520 00:31:51 -- bdev/bdev_raid.sh@287 -- # killprocess 122556 00:17:57.520 00:31:51 -- common/autotest_common.sh@936 -- # '[' -z 122556 ']' 00:17:57.520 00:31:51 -- common/autotest_common.sh@940 -- # kill -0 122556 00:17:57.520 00:31:51 -- common/autotest_common.sh@941 -- # uname 00:17:57.520 00:31:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.520 00:31:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122556 00:17:57.778 killing process with pid 122556 00:17:57.778 00:31:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:57.778 00:31:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:57.778 00:31:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122556' 00:17:57.778 00:31:51 -- common/autotest_common.sh@955 -- # kill 122556 00:17:57.778 00:31:51 -- common/autotest_common.sh@960 -- # wait 122556 00:17:57.778 [2024-04-24 00:31:51.316987] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:57.778 [2024-04-24 00:31:51.317112] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:59.158 ************************************ 00:17:59.158 END TEST raid_state_function_test_sb 00:17:59.158 ************************************ 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:59.158 00:17:59.158 real 0m12.046s 00:17:59.158 user 0m20.250s 00:17:59.158 sys 0m1.665s 00:17:59.158 00:31:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:59.158 00:31:52 -- common/autotest_common.sh@10 -- # set +x 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:17:59.158 00:31:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:59.158 00:31:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:59.158 00:31:52 -- common/autotest_common.sh@10 -- # set +x 00:17:59.158 ************************************ 00:17:59.158 START TEST raid_superblock_test 00:17:59.158 ************************************ 00:17:59.158 00:31:52 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid1 2 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 
00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:59.158 00:31:52 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:59.159 00:31:52 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:59.159 00:31:52 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:59.159 00:31:52 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:59.159 00:31:52 -- bdev/bdev_raid.sh@357 -- # raid_pid=122910 00:17:59.159 00:31:52 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:59.159 00:31:52 -- bdev/bdev_raid.sh@358 -- # waitforlisten 122910 /var/tmp/spdk-raid.sock 00:17:59.159 00:31:52 -- common/autotest_common.sh@817 -- # '[' -z 122910 ']' 00:17:59.159 00:31:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:59.159 00:31:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:59.159 00:31:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:59.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:59.159 00:31:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:59.159 00:31:52 -- common/autotest_common.sh@10 -- # set +x 00:17:59.159 [2024-04-24 00:31:52.938468] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:17:59.159 [2024-04-24 00:31:52.938829] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122910 ] 00:17:59.416 [2024-04-24 00:31:53.102833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.674 [2024-04-24 00:31:53.372732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.931 [2024-04-24 00:31:53.619659] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.189 00:31:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:00.189 00:31:53 -- common/autotest_common.sh@850 -- # return 0 00:18:00.189 00:31:53 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:00.189 00:31:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:00.189 00:31:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:00.189 00:31:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:00.189 00:31:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:00.189 00:31:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:00.189 00:31:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:00.189 00:31:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:00.189 00:31:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:00.447 malloc1 00:18:00.447 00:31:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.705 [2024-04-24 00:31:54.482812] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.705 [2024-04-24 00:31:54.483142] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.705 [2024-04-24 00:31:54.483224] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:00.705 [2024-04-24 00:31:54.483367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.705 [2024-04-24 00:31:54.486160] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.705 [2024-04-24 00:31:54.486355] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.705 pt1 00:18:00.963 00:31:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:00.963 00:31:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:00.963 00:31:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:00.963 00:31:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:00.963 00:31:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:00.963 00:31:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:00.963 00:31:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:00.963 00:31:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:00.963 00:31:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:01.221 malloc2 00:18:01.221 00:31:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.479 [2024-04-24 00:31:55.030019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.479 [2024-04-24 00:31:55.030318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.479 [2024-04-24 00:31:55.030463] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:01.479 [2024-04-24 00:31:55.030597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.479 [2024-04-24 00:31:55.033232] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.479 [2024-04-24 00:31:55.033404] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.479 pt2 00:18:01.479 00:31:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:01.479 00:31:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:01.479 00:31:55 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:18:01.736 [2024-04-24 00:31:55.294175] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:01.736 [2024-04-24 00:31:55.296448] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.736 [2024-04-24 00:31:55.296813] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:18:01.736 [2024-04-24 00:31:55.296928] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:01.736 [2024-04-24 00:31:55.297123] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:01.736 [2024-04-24 00:31:55.297569] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:18:01.736 [2024-04-24 00:31:55.297694] 
bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:18:01.736 [2024-04-24 00:31:55.297911] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.736 00:31:55 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.736 00:31:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:01.736 00:31:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:01.736 00:31:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:01.736 00:31:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:01.736 00:31:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:01.736 00:31:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.736 00:31:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.736 00:31:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.736 00:31:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.736 00:31:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.736 00:31:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.994 00:31:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.994 "name": "raid_bdev1", 00:18:01.994 "uuid": "a933c4c3-e9da-44bb-abe9-7007a1634d64", 00:18:01.994 "strip_size_kb": 0, 00:18:01.994 "state": "online", 00:18:01.994 "raid_level": "raid1", 00:18:01.994 "superblock": true, 00:18:01.994 "num_base_bdevs": 2, 00:18:01.994 "num_base_bdevs_discovered": 2, 00:18:01.994 "num_base_bdevs_operational": 2, 00:18:01.994 "base_bdevs_list": [ 00:18:01.994 { 00:18:01.994 "name": "pt1", 00:18:01.994 "uuid": "8b17c276-2cde-5d07-8a99-1e09b5f66669", 00:18:01.994 "is_configured": true, 00:18:01.994 "data_offset": 2048, 00:18:01.994 "data_size": 63488 00:18:01.994 }, 00:18:01.994 { 00:18:01.994 "name": "pt2", 00:18:01.994 "uuid": "fc614146-c27f-54e5-9e2a-b2900e94ed57", 00:18:01.994 "is_configured": true, 00:18:01.994 "data_offset": 2048, 00:18:01.994 "data_size": 63488 00:18:01.994 } 00:18:01.994 ] 00:18:01.994 }' 00:18:01.994 00:31:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.994 00:31:55 -- common/autotest_common.sh@10 -- # set +x 00:18:02.559 00:31:56 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:02.559 00:31:56 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:02.817 [2024-04-24 00:31:56.354586] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.817 00:31:56 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=a933c4c3-e9da-44bb-abe9-7007a1634d64 00:18:02.817 00:31:56 -- bdev/bdev_raid.sh@380 -- # '[' -z a933c4c3-e9da-44bb-abe9-7007a1634d64 ']' 00:18:02.817 00:31:56 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:02.817 [2024-04-24 00:31:56.554359] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.817 [2024-04-24 00:31:56.554541] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.817 [2024-04-24 00:31:56.554731] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.817 [2024-04-24 00:31:56.554867] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:02.817 [2024-04-24 00:31:56.554964] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:18:02.817 00:31:56 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.817 00:31:56 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:03.075 00:31:56 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:03.075 00:31:56 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:03.075 00:31:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:03.075 00:31:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:03.333 00:31:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:03.333 00:31:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:03.613 00:31:57 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:03.613 00:31:57 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:03.870 00:31:57 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:03.870 00:31:57 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:03.870 00:31:57 -- common/autotest_common.sh@638 -- # local es=0 00:18:03.871 00:31:57 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:03.871 00:31:57 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.871 00:31:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:03.871 00:31:57 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.871 00:31:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:03.871 00:31:57 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.871 00:31:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:03.871 00:31:57 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.871 00:31:57 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:03.871 00:31:57 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:04.129 [2024-04-24 00:31:57.838645] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:04.129 [2024-04-24 00:31:57.840977] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:04.129 [2024-04-24 00:31:57.841179] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:04.129 [2024-04-24 00:31:57.841350] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:04.129 [2024-04-24 00:31:57.841419] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.129 [2024-04-24 00:31:57.841559] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 
name raid_bdev1, state configuring 00:18:04.129 request: 00:18:04.129 { 00:18:04.129 "name": "raid_bdev1", 00:18:04.129 "raid_level": "raid1", 00:18:04.129 "base_bdevs": [ 00:18:04.129 "malloc1", 00:18:04.129 "malloc2" 00:18:04.129 ], 00:18:04.129 "superblock": false, 00:18:04.129 "method": "bdev_raid_create", 00:18:04.129 "req_id": 1 00:18:04.129 } 00:18:04.129 Got JSON-RPC error response 00:18:04.129 response: 00:18:04.129 { 00:18:04.129 "code": -17, 00:18:04.129 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:04.129 } 00:18:04.129 00:31:57 -- common/autotest_common.sh@641 -- # es=1 00:18:04.129 00:31:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:04.129 00:31:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:04.129 00:31:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:04.129 00:31:57 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.129 00:31:57 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:04.387 00:31:58 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:04.387 00:31:58 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:04.387 00:31:58 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:04.645 [2024-04-24 00:31:58.318667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:04.645 [2024-04-24 00:31:58.318957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.645 [2024-04-24 00:31:58.319105] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:04.645 [2024-04-24 00:31:58.319233] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.645 [2024-04-24 00:31:58.321623] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.645 [2024-04-24 00:31:58.321790] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:04.645 [2024-04-24 00:31:58.321979] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:04.645 [2024-04-24 00:31:58.322136] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:04.645 pt1 00:18:04.645 00:31:58 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:04.645 00:31:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:04.645 00:31:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:04.645 00:31:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:04.645 00:31:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:04.645 00:31:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:04.645 00:31:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:04.645 00:31:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:04.645 00:31:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:04.645 00:31:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:04.645 00:31:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.645 00:31:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.903 00:31:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:04.903 "name": "raid_bdev1", 00:18:04.903 "uuid": 
"a933c4c3-e9da-44bb-abe9-7007a1634d64", 00:18:04.903 "strip_size_kb": 0, 00:18:04.903 "state": "configuring", 00:18:04.903 "raid_level": "raid1", 00:18:04.903 "superblock": true, 00:18:04.903 "num_base_bdevs": 2, 00:18:04.903 "num_base_bdevs_discovered": 1, 00:18:04.903 "num_base_bdevs_operational": 2, 00:18:04.903 "base_bdevs_list": [ 00:18:04.903 { 00:18:04.903 "name": "pt1", 00:18:04.903 "uuid": "8b17c276-2cde-5d07-8a99-1e09b5f66669", 00:18:04.903 "is_configured": true, 00:18:04.903 "data_offset": 2048, 00:18:04.903 "data_size": 63488 00:18:04.903 }, 00:18:04.903 { 00:18:04.903 "name": null, 00:18:04.903 "uuid": "fc614146-c27f-54e5-9e2a-b2900e94ed57", 00:18:04.903 "is_configured": false, 00:18:04.903 "data_offset": 2048, 00:18:04.903 "data_size": 63488 00:18:04.903 } 00:18:04.903 ] 00:18:04.903 }' 00:18:04.903 00:31:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:04.903 00:31:58 -- common/autotest_common.sh@10 -- # set +x 00:18:05.473 00:31:59 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:18:05.473 00:31:59 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:05.473 00:31:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:05.473 00:31:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:05.732 [2024-04-24 00:31:59.494957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:05.732 [2024-04-24 00:31:59.495262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.732 [2024-04-24 00:31:59.495397] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:18:05.732 [2024-04-24 00:31:59.495506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.732 [2024-04-24 00:31:59.496016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.732 [2024-04-24 00:31:59.496174] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:05.732 [2024-04-24 00:31:59.496398] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:05.732 [2024-04-24 00:31:59.496523] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.732 [2024-04-24 00:31:59.496689] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:18:05.732 [2024-04-24 00:31:59.496781] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:05.732 [2024-04-24 00:31:59.496995] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:05.732 [2024-04-24 00:31:59.497435] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:18:05.732 [2024-04-24 00:31:59.497562] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:18:05.732 [2024-04-24 00:31:59.497805] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.732 pt2 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:05.732 00:31:59 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.732 00:31:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.298 00:31:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.299 "name": "raid_bdev1", 00:18:06.299 "uuid": "a933c4c3-e9da-44bb-abe9-7007a1634d64", 00:18:06.299 "strip_size_kb": 0, 00:18:06.299 "state": "online", 00:18:06.299 "raid_level": "raid1", 00:18:06.299 "superblock": true, 00:18:06.299 "num_base_bdevs": 2, 00:18:06.299 "num_base_bdevs_discovered": 2, 00:18:06.299 "num_base_bdevs_operational": 2, 00:18:06.299 "base_bdevs_list": [ 00:18:06.299 { 00:18:06.299 "name": "pt1", 00:18:06.299 "uuid": "8b17c276-2cde-5d07-8a99-1e09b5f66669", 00:18:06.299 "is_configured": true, 00:18:06.299 "data_offset": 2048, 00:18:06.299 "data_size": 63488 00:18:06.299 }, 00:18:06.299 { 00:18:06.299 "name": "pt2", 00:18:06.299 "uuid": "fc614146-c27f-54e5-9e2a-b2900e94ed57", 00:18:06.299 "is_configured": true, 00:18:06.299 "data_offset": 2048, 00:18:06.299 "data_size": 63488 00:18:06.299 } 00:18:06.299 ] 00:18:06.299 }' 00:18:06.299 00:31:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.299 00:31:59 -- common/autotest_common.sh@10 -- # set +x 00:18:06.862 00:32:00 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:06.862 00:32:00 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:07.120 [2024-04-24 00:32:00.675393] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.120 00:32:00 -- bdev/bdev_raid.sh@430 -- # '[' a933c4c3-e9da-44bb-abe9-7007a1634d64 '!=' a933c4c3-e9da-44bb-abe9-7007a1634d64 ']' 00:18:07.120 00:32:00 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:07.120 00:32:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:07.120 00:32:00 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:07.120 00:32:00 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:07.381 [2024-04-24 00:32:00.947321] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:07.381 00:32:00 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.381 00:32:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:07.381 00:32:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:07.381 00:32:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:07.381 00:32:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:07.381 00:32:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:07.381 00:32:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:07.381 00:32:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:07.381 00:32:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:07.381 00:32:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:07.381 00:32:00 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.381 00:32:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.638 00:32:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:07.638 "name": "raid_bdev1", 00:18:07.638 "uuid": "a933c4c3-e9da-44bb-abe9-7007a1634d64", 00:18:07.638 "strip_size_kb": 0, 00:18:07.638 "state": "online", 00:18:07.638 "raid_level": "raid1", 00:18:07.638 "superblock": true, 00:18:07.638 "num_base_bdevs": 2, 00:18:07.638 "num_base_bdevs_discovered": 1, 00:18:07.638 "num_base_bdevs_operational": 1, 00:18:07.638 "base_bdevs_list": [ 00:18:07.638 { 00:18:07.638 "name": null, 00:18:07.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.638 "is_configured": false, 00:18:07.638 "data_offset": 2048, 00:18:07.638 "data_size": 63488 00:18:07.638 }, 00:18:07.638 { 00:18:07.638 "name": "pt2", 00:18:07.638 "uuid": "fc614146-c27f-54e5-9e2a-b2900e94ed57", 00:18:07.638 "is_configured": true, 00:18:07.638 "data_offset": 2048, 00:18:07.638 "data_size": 63488 00:18:07.638 } 00:18:07.638 ] 00:18:07.638 }' 00:18:07.638 00:32:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:07.638 00:32:01 -- common/autotest_common.sh@10 -- # set +x 00:18:08.206 00:32:01 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:08.464 [2024-04-24 00:32:02.131535] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.464 [2024-04-24 00:32:02.131719] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.464 [2024-04-24 00:32:02.131884] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.464 [2024-04-24 00:32:02.132012] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.464 [2024-04-24 00:32:02.132101] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:18:08.464 00:32:02 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.464 00:32:02 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:08.722 00:32:02 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:08.722 00:32:02 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:08.722 00:32:02 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:08.722 00:32:02 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:08.722 00:32:02 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:08.978 00:32:02 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:08.978 00:32:02 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:08.978 00:32:02 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:08.978 00:32:02 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:08.978 00:32:02 -- bdev/bdev_raid.sh@462 -- # i=1 00:18:08.978 00:32:02 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:09.234 [2024-04-24 00:32:02.979668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:09.234 [2024-04-24 00:32:02.979965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.234 [2024-04-24 
00:32:02.980151] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:09.234 [2024-04-24 00:32:02.980264] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.234 [2024-04-24 00:32:02.982723] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.234 [2024-04-24 00:32:02.982890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:09.234 [2024-04-24 00:32:02.983130] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:09.234 [2024-04-24 00:32:02.983277] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:09.234 [2024-04-24 00:32:02.983425] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:18:09.234 [2024-04-24 00:32:02.983513] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:09.234 [2024-04-24 00:32:02.983706] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:18:09.234 [2024-04-24 00:32:02.984101] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:18:09.235 [2024-04-24 00:32:02.984209] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:18:09.235 [2024-04-24 00:32:02.984474] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.235 pt2 00:18:09.235 00:32:02 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.235 00:32:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:09.235 00:32:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:09.235 00:32:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:09.235 00:32:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:09.235 00:32:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:09.235 00:32:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.235 00:32:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.235 00:32:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.235 00:32:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.235 00:32:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.235 00:32:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.493 00:32:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:09.493 "name": "raid_bdev1", 00:18:09.493 "uuid": "a933c4c3-e9da-44bb-abe9-7007a1634d64", 00:18:09.493 "strip_size_kb": 0, 00:18:09.493 "state": "online", 00:18:09.493 "raid_level": "raid1", 00:18:09.493 "superblock": true, 00:18:09.493 "num_base_bdevs": 2, 00:18:09.493 "num_base_bdevs_discovered": 1, 00:18:09.493 "num_base_bdevs_operational": 1, 00:18:09.493 "base_bdevs_list": [ 00:18:09.493 { 00:18:09.493 "name": null, 00:18:09.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.493 "is_configured": false, 00:18:09.493 "data_offset": 2048, 00:18:09.493 "data_size": 63488 00:18:09.493 }, 00:18:09.493 { 00:18:09.493 "name": "pt2", 00:18:09.493 "uuid": "fc614146-c27f-54e5-9e2a-b2900e94ed57", 00:18:09.493 "is_configured": true, 00:18:09.493 "data_offset": 2048, 00:18:09.493 "data_size": 63488 00:18:09.493 } 00:18:09.493 ] 00:18:09.493 }' 00:18:09.493 00:32:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
00:18:09.493 00:32:03 -- common/autotest_common.sh@10 -- # set +x 00:18:10.060 00:32:03 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:18:10.060 00:32:03 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:10.060 00:32:03 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:10.381 [2024-04-24 00:32:03.968907] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.381 00:32:03 -- bdev/bdev_raid.sh@506 -- # '[' a933c4c3-e9da-44bb-abe9-7007a1634d64 '!=' a933c4c3-e9da-44bb-abe9-7007a1634d64 ']' 00:18:10.381 00:32:03 -- bdev/bdev_raid.sh@511 -- # killprocess 122910 00:18:10.381 00:32:03 -- common/autotest_common.sh@936 -- # '[' -z 122910 ']' 00:18:10.381 00:32:03 -- common/autotest_common.sh@940 -- # kill -0 122910 00:18:10.381 00:32:03 -- common/autotest_common.sh@941 -- # uname 00:18:10.381 00:32:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:10.381 00:32:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122910 00:18:10.381 killing process with pid 122910 00:18:10.381 00:32:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:10.381 00:32:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:10.381 00:32:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122910' 00:18:10.381 00:32:04 -- common/autotest_common.sh@955 -- # kill 122910 00:18:10.381 [2024-04-24 00:32:04.019374] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:10.381 00:32:04 -- common/autotest_common.sh@960 -- # wait 122910 00:18:10.381 [2024-04-24 00:32:04.019448] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.381 [2024-04-24 00:32:04.019493] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.381 [2024-04-24 00:32:04.019503] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:18:10.640 [2024-04-24 00:32:04.225136] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.015 ************************************ 00:18:12.015 END TEST raid_superblock_test 00:18:12.015 ************************************ 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:12.015 00:18:12.015 real 0m12.721s 00:18:12.015 user 0m21.961s 00:18:12.015 sys 0m1.737s 00:18:12.015 00:32:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:12.015 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:18:12.015 00:32:05 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:12.015 00:32:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:12.015 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:18:12.015 ************************************ 00:18:12.015 START TEST raid_state_function_test 00:18:12.015 ************************************ 00:18:12.015 00:32:05 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 3 false 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@204 -- 
# local superblock=false 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@226 -- # raid_pid=123283 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123283' 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:12.015 Process raid pid: 123283 00:18:12.015 00:32:05 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123283 /var/tmp/spdk-raid.sock 00:18:12.015 00:32:05 -- common/autotest_common.sh@817 -- # '[' -z 123283 ']' 00:18:12.015 00:32:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:12.015 00:32:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:12.015 00:32:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:12.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:12.015 00:32:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:12.015 00:32:05 -- common/autotest_common.sh@10 -- # set +x 00:18:12.015 [2024-04-24 00:32:05.778154] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:18:12.015 [2024-04-24 00:32:05.778568] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.273 [2024-04-24 00:32:05.959976] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.531 [2024-04-24 00:32:06.231136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.789 [2024-04-24 00:32:06.461386] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.047 00:32:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:13.047 00:32:06 -- common/autotest_common.sh@850 -- # return 0 00:18:13.047 00:32:06 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:13.305 [2024-04-24 00:32:07.043081] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:13.305 [2024-04-24 00:32:07.043376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:13.305 [2024-04-24 00:32:07.043527] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.305 [2024-04-24 00:32:07.043588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.305 [2024-04-24 00:32:07.043793] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:13.305 [2024-04-24 00:32:07.043878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:13.305 00:32:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:13.305 00:32:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:13.305 00:32:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:13.305 00:32:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:13.305 00:32:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:13.305 00:32:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:13.305 00:32:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.305 00:32:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.305 00:32:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.305 00:32:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.305 00:32:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.305 00:32:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.563 00:32:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:13.563 "name": "Existed_Raid", 00:18:13.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.563 "strip_size_kb": 64, 00:18:13.563 "state": "configuring", 00:18:13.563 "raid_level": "raid0", 00:18:13.563 "superblock": false, 00:18:13.563 "num_base_bdevs": 3, 00:18:13.563 "num_base_bdevs_discovered": 0, 00:18:13.563 "num_base_bdevs_operational": 3, 00:18:13.563 "base_bdevs_list": [ 00:18:13.563 { 00:18:13.563 "name": "BaseBdev1", 00:18:13.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.563 "is_configured": false, 00:18:13.563 "data_offset": 0, 00:18:13.563 "data_size": 0 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "name": "BaseBdev2", 00:18:13.563 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:13.563 "is_configured": false, 00:18:13.563 "data_offset": 0, 00:18:13.563 "data_size": 0 00:18:13.563 }, 00:18:13.563 { 00:18:13.563 "name": "BaseBdev3", 00:18:13.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.563 "is_configured": false, 00:18:13.563 "data_offset": 0, 00:18:13.563 "data_size": 0 00:18:13.563 } 00:18:13.563 ] 00:18:13.563 }' 00:18:13.563 00:32:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:13.563 00:32:07 -- common/autotest_common.sh@10 -- # set +x 00:18:14.498 00:32:07 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:14.756 [2024-04-24 00:32:08.311293] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:14.756 [2024-04-24 00:32:08.311543] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:18:14.756 00:32:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:15.014 [2024-04-24 00:32:08.595347] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:15.014 [2024-04-24 00:32:08.595674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:15.015 [2024-04-24 00:32:08.595887] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.015 [2024-04-24 00:32:08.595947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.015 [2024-04-24 00:32:08.595978] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:15.015 [2024-04-24 00:32:08.596027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:15.015 00:32:08 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:15.273 [2024-04-24 00:32:08.935633] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.273 BaseBdev1 00:18:15.273 00:32:08 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:15.273 00:32:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:15.273 00:32:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:15.273 00:32:08 -- common/autotest_common.sh@887 -- # local i 00:18:15.273 00:32:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:15.273 00:32:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:15.273 00:32:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:15.531 00:32:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:15.816 [ 00:18:15.816 { 00:18:15.816 "name": "BaseBdev1", 00:18:15.816 "aliases": [ 00:18:15.816 "5aacbdb2-30d8-4cde-a10b-5d059043322c" 00:18:15.816 ], 00:18:15.816 "product_name": "Malloc disk", 00:18:15.816 "block_size": 512, 00:18:15.816 "num_blocks": 65536, 00:18:15.816 "uuid": "5aacbdb2-30d8-4cde-a10b-5d059043322c", 00:18:15.816 "assigned_rate_limits": { 00:18:15.816 "rw_ios_per_sec": 0, 00:18:15.816 "rw_mbytes_per_sec": 0, 00:18:15.816 "r_mbytes_per_sec": 0, 00:18:15.816 "w_mbytes_per_sec": 0 
00:18:15.816 }, 00:18:15.816 "claimed": true, 00:18:15.816 "claim_type": "exclusive_write", 00:18:15.816 "zoned": false, 00:18:15.816 "supported_io_types": { 00:18:15.816 "read": true, 00:18:15.816 "write": true, 00:18:15.816 "unmap": true, 00:18:15.816 "write_zeroes": true, 00:18:15.816 "flush": true, 00:18:15.816 "reset": true, 00:18:15.816 "compare": false, 00:18:15.816 "compare_and_write": false, 00:18:15.816 "abort": true, 00:18:15.816 "nvme_admin": false, 00:18:15.816 "nvme_io": false 00:18:15.816 }, 00:18:15.816 "memory_domains": [ 00:18:15.816 { 00:18:15.816 "dma_device_id": "system", 00:18:15.816 "dma_device_type": 1 00:18:15.816 }, 00:18:15.816 { 00:18:15.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.817 "dma_device_type": 2 00:18:15.817 } 00:18:15.817 ], 00:18:15.817 "driver_specific": {} 00:18:15.817 } 00:18:15.817 ] 00:18:15.817 00:32:09 -- common/autotest_common.sh@893 -- # return 0 00:18:15.817 00:32:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:15.817 00:32:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:15.817 00:32:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:15.817 00:32:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:15.817 00:32:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:15.817 00:32:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:15.817 00:32:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.817 00:32:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.817 00:32:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.817 00:32:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.817 00:32:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.817 00:32:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.082 00:32:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:16.082 "name": "Existed_Raid", 00:18:16.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.082 "strip_size_kb": 64, 00:18:16.082 "state": "configuring", 00:18:16.082 "raid_level": "raid0", 00:18:16.082 "superblock": false, 00:18:16.082 "num_base_bdevs": 3, 00:18:16.082 "num_base_bdevs_discovered": 1, 00:18:16.082 "num_base_bdevs_operational": 3, 00:18:16.082 "base_bdevs_list": [ 00:18:16.082 { 00:18:16.082 "name": "BaseBdev1", 00:18:16.082 "uuid": "5aacbdb2-30d8-4cde-a10b-5d059043322c", 00:18:16.082 "is_configured": true, 00:18:16.082 "data_offset": 0, 00:18:16.082 "data_size": 65536 00:18:16.082 }, 00:18:16.082 { 00:18:16.082 "name": "BaseBdev2", 00:18:16.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.082 "is_configured": false, 00:18:16.082 "data_offset": 0, 00:18:16.082 "data_size": 0 00:18:16.082 }, 00:18:16.082 { 00:18:16.082 "name": "BaseBdev3", 00:18:16.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.082 "is_configured": false, 00:18:16.082 "data_offset": 0, 00:18:16.082 "data_size": 0 00:18:16.082 } 00:18:16.082 ] 00:18:16.082 }' 00:18:16.082 00:32:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:16.082 00:32:09 -- common/autotest_common.sh@10 -- # set +x 00:18:16.648 00:32:10 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:16.905 [2024-04-24 00:32:10.648074] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.905 
[2024-04-24 00:32:10.648336] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:18:16.905 00:32:10 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:16.906 00:32:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:17.164 [2024-04-24 00:32:10.844130] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:17.164 [2024-04-24 00:32:10.846482] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.164 [2024-04-24 00:32:10.846673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.164 [2024-04-24 00:32:10.846765] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:17.164 [2024-04-24 00:32:10.846827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.164 00:32:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.422 00:32:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.422 "name": "Existed_Raid", 00:18:17.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.422 "strip_size_kb": 64, 00:18:17.422 "state": "configuring", 00:18:17.422 "raid_level": "raid0", 00:18:17.422 "superblock": false, 00:18:17.422 "num_base_bdevs": 3, 00:18:17.422 "num_base_bdevs_discovered": 1, 00:18:17.422 "num_base_bdevs_operational": 3, 00:18:17.422 "base_bdevs_list": [ 00:18:17.422 { 00:18:17.422 "name": "BaseBdev1", 00:18:17.422 "uuid": "5aacbdb2-30d8-4cde-a10b-5d059043322c", 00:18:17.422 "is_configured": true, 00:18:17.422 "data_offset": 0, 00:18:17.422 "data_size": 65536 00:18:17.422 }, 00:18:17.422 { 00:18:17.422 "name": "BaseBdev2", 00:18:17.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.422 "is_configured": false, 00:18:17.422 "data_offset": 0, 00:18:17.422 "data_size": 0 00:18:17.422 }, 00:18:17.422 { 00:18:17.422 "name": "BaseBdev3", 00:18:17.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.422 "is_configured": false, 00:18:17.422 "data_offset": 0, 00:18:17.422 "data_size": 0 00:18:17.422 } 00:18:17.423 ] 00:18:17.423 }' 00:18:17.423 00:32:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.423 00:32:11 -- common/autotest_common.sh@10 
-- # set +x 00:18:18.014 00:32:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:18.273 [2024-04-24 00:32:11.985570] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.273 BaseBdev2 00:18:18.273 00:32:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:18.273 00:32:12 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:18:18.273 00:32:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:18.273 00:32:12 -- common/autotest_common.sh@887 -- # local i 00:18:18.273 00:32:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:18.273 00:32:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:18.273 00:32:12 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.840 00:32:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:18.840 [ 00:18:18.840 { 00:18:18.840 "name": "BaseBdev2", 00:18:18.840 "aliases": [ 00:18:18.840 "69b090d9-8282-4e59-9a3e-b0e222fc8240" 00:18:18.840 ], 00:18:18.840 "product_name": "Malloc disk", 00:18:18.840 "block_size": 512, 00:18:18.840 "num_blocks": 65536, 00:18:18.840 "uuid": "69b090d9-8282-4e59-9a3e-b0e222fc8240", 00:18:18.840 "assigned_rate_limits": { 00:18:18.840 "rw_ios_per_sec": 0, 00:18:18.840 "rw_mbytes_per_sec": 0, 00:18:18.840 "r_mbytes_per_sec": 0, 00:18:18.840 "w_mbytes_per_sec": 0 00:18:18.840 }, 00:18:18.840 "claimed": true, 00:18:18.840 "claim_type": "exclusive_write", 00:18:18.840 "zoned": false, 00:18:18.840 "supported_io_types": { 00:18:18.840 "read": true, 00:18:18.840 "write": true, 00:18:18.840 "unmap": true, 00:18:18.840 "write_zeroes": true, 00:18:18.840 "flush": true, 00:18:18.840 "reset": true, 00:18:18.840 "compare": false, 00:18:18.840 "compare_and_write": false, 00:18:18.840 "abort": true, 00:18:18.840 "nvme_admin": false, 00:18:18.840 "nvme_io": false 00:18:18.840 }, 00:18:18.840 "memory_domains": [ 00:18:18.840 { 00:18:18.840 "dma_device_id": "system", 00:18:18.840 "dma_device_type": 1 00:18:18.840 }, 00:18:18.840 { 00:18:18.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.840 "dma_device_type": 2 00:18:18.840 } 00:18:18.840 ], 00:18:18.840 "driver_specific": {} 00:18:18.840 } 00:18:18.840 ] 00:18:18.840 00:32:12 -- common/autotest_common.sh@893 -- # return 0 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:18:18.840 00:32:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.440 00:32:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:19.440 "name": "Existed_Raid", 00:18:19.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.440 "strip_size_kb": 64, 00:18:19.440 "state": "configuring", 00:18:19.440 "raid_level": "raid0", 00:18:19.440 "superblock": false, 00:18:19.440 "num_base_bdevs": 3, 00:18:19.440 "num_base_bdevs_discovered": 2, 00:18:19.440 "num_base_bdevs_operational": 3, 00:18:19.440 "base_bdevs_list": [ 00:18:19.440 { 00:18:19.440 "name": "BaseBdev1", 00:18:19.440 "uuid": "5aacbdb2-30d8-4cde-a10b-5d059043322c", 00:18:19.440 "is_configured": true, 00:18:19.440 "data_offset": 0, 00:18:19.440 "data_size": 65536 00:18:19.440 }, 00:18:19.440 { 00:18:19.440 "name": "BaseBdev2", 00:18:19.440 "uuid": "69b090d9-8282-4e59-9a3e-b0e222fc8240", 00:18:19.440 "is_configured": true, 00:18:19.440 "data_offset": 0, 00:18:19.440 "data_size": 65536 00:18:19.440 }, 00:18:19.440 { 00:18:19.440 "name": "BaseBdev3", 00:18:19.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.440 "is_configured": false, 00:18:19.440 "data_offset": 0, 00:18:19.440 "data_size": 0 00:18:19.440 } 00:18:19.440 ] 00:18:19.440 }' 00:18:19.440 00:32:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:19.440 00:32:12 -- common/autotest_common.sh@10 -- # set +x 00:18:20.006 00:32:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:20.006 [2024-04-24 00:32:13.772680] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:20.006 [2024-04-24 00:32:13.772940] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:18:20.006 [2024-04-24 00:32:13.772985] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:20.006 [2024-04-24 00:32:13.773211] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:20.006 [2024-04-24 00:32:13.773653] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:18:20.006 [2024-04-24 00:32:13.773786] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:18:20.006 [2024-04-24 00:32:13.774156] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.006 BaseBdev3 00:18:20.265 00:32:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:20.265 00:32:13 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:18:20.265 00:32:13 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:20.265 00:32:13 -- common/autotest_common.sh@887 -- # local i 00:18:20.265 00:32:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:20.265 00:32:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:20.265 00:32:13 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:20.524 00:32:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:20.791 [ 00:18:20.791 { 00:18:20.791 "name": "BaseBdev3", 00:18:20.791 "aliases": [ 00:18:20.791 "edf19c66-b902-460c-bb83-372a44196f85" 00:18:20.791 ], 00:18:20.791 "product_name": "Malloc disk", 00:18:20.791 
"block_size": 512, 00:18:20.791 "num_blocks": 65536, 00:18:20.791 "uuid": "edf19c66-b902-460c-bb83-372a44196f85", 00:18:20.791 "assigned_rate_limits": { 00:18:20.791 "rw_ios_per_sec": 0, 00:18:20.791 "rw_mbytes_per_sec": 0, 00:18:20.791 "r_mbytes_per_sec": 0, 00:18:20.791 "w_mbytes_per_sec": 0 00:18:20.791 }, 00:18:20.791 "claimed": true, 00:18:20.791 "claim_type": "exclusive_write", 00:18:20.791 "zoned": false, 00:18:20.791 "supported_io_types": { 00:18:20.791 "read": true, 00:18:20.791 "write": true, 00:18:20.791 "unmap": true, 00:18:20.791 "write_zeroes": true, 00:18:20.791 "flush": true, 00:18:20.791 "reset": true, 00:18:20.791 "compare": false, 00:18:20.791 "compare_and_write": false, 00:18:20.791 "abort": true, 00:18:20.791 "nvme_admin": false, 00:18:20.791 "nvme_io": false 00:18:20.791 }, 00:18:20.791 "memory_domains": [ 00:18:20.791 { 00:18:20.791 "dma_device_id": "system", 00:18:20.791 "dma_device_type": 1 00:18:20.791 }, 00:18:20.791 { 00:18:20.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.791 "dma_device_type": 2 00:18:20.791 } 00:18:20.791 ], 00:18:20.791 "driver_specific": {} 00:18:20.791 } 00:18:20.791 ] 00:18:20.791 00:32:14 -- common/autotest_common.sh@893 -- # return 0 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.791 00:32:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.791 "name": "Existed_Raid", 00:18:20.791 "uuid": "c4144f31-f92e-464a-a219-4e7f5d72481d", 00:18:20.791 "strip_size_kb": 64, 00:18:20.791 "state": "online", 00:18:20.791 "raid_level": "raid0", 00:18:20.791 "superblock": false, 00:18:20.791 "num_base_bdevs": 3, 00:18:20.791 "num_base_bdevs_discovered": 3, 00:18:20.791 "num_base_bdevs_operational": 3, 00:18:20.791 "base_bdevs_list": [ 00:18:20.791 { 00:18:20.791 "name": "BaseBdev1", 00:18:20.791 "uuid": "5aacbdb2-30d8-4cde-a10b-5d059043322c", 00:18:20.791 "is_configured": true, 00:18:20.791 "data_offset": 0, 00:18:20.791 "data_size": 65536 00:18:20.791 }, 00:18:20.791 { 00:18:20.791 "name": "BaseBdev2", 00:18:20.791 "uuid": "69b090d9-8282-4e59-9a3e-b0e222fc8240", 00:18:20.791 "is_configured": true, 00:18:20.791 "data_offset": 0, 00:18:20.791 "data_size": 65536 00:18:20.791 }, 00:18:20.791 { 00:18:20.791 "name": "BaseBdev3", 00:18:20.791 "uuid": "edf19c66-b902-460c-bb83-372a44196f85", 00:18:20.791 "is_configured": true, 00:18:20.791 "data_offset": 0, 00:18:20.791 "data_size": 65536 00:18:20.792 } 00:18:20.792 ] 
00:18:20.792 }' 00:18:20.792 00:32:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.792 00:32:14 -- common/autotest_common.sh@10 -- # set +x 00:18:21.785 00:32:15 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:21.785 [2024-04-24 00:32:15.469210] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:21.785 [2024-04-24 00:32:15.469441] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.785 [2024-04-24 00:32:15.469610] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.080 00:32:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.338 00:32:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.338 "name": "Existed_Raid", 00:18:22.338 "uuid": "c4144f31-f92e-464a-a219-4e7f5d72481d", 00:18:22.338 "strip_size_kb": 64, 00:18:22.338 "state": "offline", 00:18:22.338 "raid_level": "raid0", 00:18:22.338 "superblock": false, 00:18:22.338 "num_base_bdevs": 3, 00:18:22.338 "num_base_bdevs_discovered": 2, 00:18:22.338 "num_base_bdevs_operational": 2, 00:18:22.338 "base_bdevs_list": [ 00:18:22.338 { 00:18:22.338 "name": null, 00:18:22.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.338 "is_configured": false, 00:18:22.338 "data_offset": 0, 00:18:22.338 "data_size": 65536 00:18:22.338 }, 00:18:22.338 { 00:18:22.338 "name": "BaseBdev2", 00:18:22.339 "uuid": "69b090d9-8282-4e59-9a3e-b0e222fc8240", 00:18:22.339 "is_configured": true, 00:18:22.339 "data_offset": 0, 00:18:22.339 "data_size": 65536 00:18:22.339 }, 00:18:22.339 { 00:18:22.339 "name": "BaseBdev3", 00:18:22.339 "uuid": "edf19c66-b902-460c-bb83-372a44196f85", 00:18:22.339 "is_configured": true, 00:18:22.339 "data_offset": 0, 00:18:22.339 "data_size": 65536 00:18:22.339 } 00:18:22.339 ] 00:18:22.339 }' 00:18:22.339 00:32:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.339 00:32:15 -- common/autotest_common.sh@10 -- # set +x 00:18:22.904 00:32:16 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:22.904 00:32:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:22.904 00:32:16 -- bdev/bdev_raid.sh@274 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.904 00:32:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:23.162 00:32:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:23.162 00:32:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:23.162 00:32:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:23.420 [2024-04-24 00:32:17.011533] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:23.420 00:32:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:23.420 00:32:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:23.420 00:32:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.420 00:32:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:23.677 00:32:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:23.677 00:32:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:23.677 00:32:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:23.934 [2024-04-24 00:32:17.651694] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:23.934 [2024-04-24 00:32:17.651902] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:18:24.193 00:32:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:24.193 00:32:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:24.193 00:32:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.193 00:32:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:24.451 00:32:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:24.451 00:32:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:24.451 00:32:18 -- bdev/bdev_raid.sh@287 -- # killprocess 123283 00:18:24.451 00:32:18 -- common/autotest_common.sh@936 -- # '[' -z 123283 ']' 00:18:24.451 00:32:18 -- common/autotest_common.sh@940 -- # kill -0 123283 00:18:24.451 00:32:18 -- common/autotest_common.sh@941 -- # uname 00:18:24.451 00:32:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:24.451 00:32:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123283 00:18:24.451 00:32:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:24.451 00:32:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:24.451 killing process with pid 123283 00:18:24.451 00:32:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123283' 00:18:24.451 00:32:18 -- common/autotest_common.sh@955 -- # kill 123283 00:18:24.451 00:32:18 -- common/autotest_common.sh@960 -- # wait 123283 00:18:24.451 [2024-04-24 00:32:18.093926] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.451 [2024-04-24 00:32:18.094057] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.900 ************************************ 00:18:25.900 END TEST raid_state_function_test 00:18:25.900 ************************************ 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:25.900 00:18:25.900 real 0m13.824s 00:18:25.900 user 0m23.601s 00:18:25.900 sys 0m1.965s 00:18:25.900 00:32:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:25.900 
00:32:19 -- common/autotest_common.sh@10 -- # set +x 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:18:25.900 00:32:19 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:25.900 00:32:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:25.900 00:32:19 -- common/autotest_common.sh@10 -- # set +x 00:18:25.900 ************************************ 00:18:25.900 START TEST raid_state_function_test_sb 00:18:25.900 ************************************ 00:18:25.900 00:32:19 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 3 true 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@226 -- # raid_pid=123677 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123677' 00:18:25.900 Process raid pid: 123677 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123677 /var/tmp/spdk-raid.sock 00:18:25.900 00:32:19 -- common/autotest_common.sh@817 -- # '[' -z 123677 ']' 00:18:25.900 00:32:19 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:25.900 00:32:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:25.900 00:32:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:25.900 00:32:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:25.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:18:25.900 00:32:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:25.900 00:32:19 -- common/autotest_common.sh@10 -- # set +x 00:18:26.160 [2024-04-24 00:32:19.711903] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:18:26.160 [2024-04-24 00:32:19.712301] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.160 [2024-04-24 00:32:19.894586] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.420 [2024-04-24 00:32:20.163575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.678 [2024-04-24 00:32:20.381286] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.936 00:32:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:26.936 00:32:20 -- common/autotest_common.sh@850 -- # return 0 00:18:26.936 00:32:20 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:27.217 [2024-04-24 00:32:20.919672] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.217 [2024-04-24 00:32:20.920031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:27.217 [2024-04-24 00:32:20.920131] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.217 [2024-04-24 00:32:20.920188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.217 [2024-04-24 00:32:20.920219] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:27.217 [2024-04-24 00:32:20.920364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.217 00:32:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:27.217 00:32:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:27.217 00:32:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:27.217 00:32:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:27.217 00:32:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:27.217 00:32:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:27.217 00:32:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.217 00:32:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.217 00:32:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.217 00:32:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.217 00:32:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.217 00:32:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.474 00:32:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.474 "name": "Existed_Raid", 00:18:27.474 "uuid": "3d5eac76-4716-4af1-b255-6a2a82bdd5d6", 00:18:27.474 "strip_size_kb": 64, 00:18:27.474 "state": "configuring", 00:18:27.474 "raid_level": "raid0", 00:18:27.474 "superblock": true, 00:18:27.474 "num_base_bdevs": 3, 00:18:27.474 "num_base_bdevs_discovered": 0, 00:18:27.474 "num_base_bdevs_operational": 3, 00:18:27.474 "base_bdevs_list": [ 00:18:27.474 { 00:18:27.474 "name": 
"BaseBdev1", 00:18:27.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.474 "is_configured": false, 00:18:27.474 "data_offset": 0, 00:18:27.474 "data_size": 0 00:18:27.474 }, 00:18:27.474 { 00:18:27.474 "name": "BaseBdev2", 00:18:27.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.474 "is_configured": false, 00:18:27.474 "data_offset": 0, 00:18:27.474 "data_size": 0 00:18:27.474 }, 00:18:27.474 { 00:18:27.474 "name": "BaseBdev3", 00:18:27.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.474 "is_configured": false, 00:18:27.474 "data_offset": 0, 00:18:27.474 "data_size": 0 00:18:27.474 } 00:18:27.474 ] 00:18:27.474 }' 00:18:27.474 00:32:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.474 00:32:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.408 00:32:21 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:28.408 [2024-04-24 00:32:22.087776] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:28.408 [2024-04-24 00:32:22.088005] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:18:28.408 00:32:22 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:28.699 [2024-04-24 00:32:22.391843] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:28.699 [2024-04-24 00:32:22.392084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:28.699 [2024-04-24 00:32:22.392189] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:28.699 [2024-04-24 00:32:22.392243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:28.699 [2024-04-24 00:32:22.392273] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:28.699 [2024-04-24 00:32:22.392378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:28.699 00:32:22 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:28.958 [2024-04-24 00:32:22.738899] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.958 BaseBdev1 00:18:29.219 00:32:22 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:29.219 00:32:22 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:29.219 00:32:22 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:29.219 00:32:22 -- common/autotest_common.sh@887 -- # local i 00:18:29.219 00:32:22 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:29.219 00:32:22 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:29.219 00:32:22 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:29.479 00:32:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:29.479 [ 00:18:29.479 { 00:18:29.479 "name": "BaseBdev1", 00:18:29.479 "aliases": [ 00:18:29.479 "899fabb8-e595-43a8-baf2-c067ab26b4dd" 00:18:29.479 ], 00:18:29.479 "product_name": "Malloc disk", 00:18:29.479 "block_size": 512, 00:18:29.479 
"num_blocks": 65536, 00:18:29.479 "uuid": "899fabb8-e595-43a8-baf2-c067ab26b4dd", 00:18:29.479 "assigned_rate_limits": { 00:18:29.479 "rw_ios_per_sec": 0, 00:18:29.479 "rw_mbytes_per_sec": 0, 00:18:29.479 "r_mbytes_per_sec": 0, 00:18:29.479 "w_mbytes_per_sec": 0 00:18:29.479 }, 00:18:29.479 "claimed": true, 00:18:29.479 "claim_type": "exclusive_write", 00:18:29.479 "zoned": false, 00:18:29.479 "supported_io_types": { 00:18:29.479 "read": true, 00:18:29.479 "write": true, 00:18:29.479 "unmap": true, 00:18:29.479 "write_zeroes": true, 00:18:29.479 "flush": true, 00:18:29.479 "reset": true, 00:18:29.479 "compare": false, 00:18:29.479 "compare_and_write": false, 00:18:29.479 "abort": true, 00:18:29.479 "nvme_admin": false, 00:18:29.479 "nvme_io": false 00:18:29.479 }, 00:18:29.479 "memory_domains": [ 00:18:29.479 { 00:18:29.479 "dma_device_id": "system", 00:18:29.479 "dma_device_type": 1 00:18:29.479 }, 00:18:29.479 { 00:18:29.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.479 "dma_device_type": 2 00:18:29.479 } 00:18:29.479 ], 00:18:29.479 "driver_specific": {} 00:18:29.479 } 00:18:29.479 ] 00:18:29.479 00:32:23 -- common/autotest_common.sh@893 -- # return 0 00:18:29.479 00:32:23 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:29.479 00:32:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:29.479 00:32:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:29.479 00:32:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:29.479 00:32:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:29.479 00:32:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:29.479 00:32:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.479 00:32:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.479 00:32:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.479 00:32:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:29.479 00:32:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.479 00:32:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.738 00:32:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.738 "name": "Existed_Raid", 00:18:29.738 "uuid": "76f3115b-a478-4a49-b18a-29006c95e72f", 00:18:29.738 "strip_size_kb": 64, 00:18:29.738 "state": "configuring", 00:18:29.738 "raid_level": "raid0", 00:18:29.738 "superblock": true, 00:18:29.738 "num_base_bdevs": 3, 00:18:29.738 "num_base_bdevs_discovered": 1, 00:18:29.738 "num_base_bdevs_operational": 3, 00:18:29.738 "base_bdevs_list": [ 00:18:29.738 { 00:18:29.738 "name": "BaseBdev1", 00:18:29.738 "uuid": "899fabb8-e595-43a8-baf2-c067ab26b4dd", 00:18:29.738 "is_configured": true, 00:18:29.738 "data_offset": 2048, 00:18:29.738 "data_size": 63488 00:18:29.738 }, 00:18:29.738 { 00:18:29.738 "name": "BaseBdev2", 00:18:29.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.738 "is_configured": false, 00:18:29.738 "data_offset": 0, 00:18:29.738 "data_size": 0 00:18:29.738 }, 00:18:29.738 { 00:18:29.738 "name": "BaseBdev3", 00:18:29.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.738 "is_configured": false, 00:18:29.738 "data_offset": 0, 00:18:29.738 "data_size": 0 00:18:29.738 } 00:18:29.738 ] 00:18:29.738 }' 00:18:29.738 00:32:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.738 00:32:23 -- common/autotest_common.sh@10 -- # set +x 00:18:30.673 
00:32:24 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:30.673 [2024-04-24 00:32:24.399310] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:30.673 [2024-04-24 00:32:24.399566] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:18:30.673 00:32:24 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:30.673 00:32:24 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:31.240 00:32:24 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:31.240 BaseBdev1 00:18:31.498 00:32:25 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:31.498 00:32:25 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:31.498 00:32:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:31.498 00:32:25 -- common/autotest_common.sh@887 -- # local i 00:18:31.498 00:32:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:31.498 00:32:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:31.498 00:32:25 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:31.756 00:32:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:31.756 [ 00:18:31.756 { 00:18:31.756 "name": "BaseBdev1", 00:18:31.756 "aliases": [ 00:18:31.756 "69cea2f0-9e52-492a-af50-de368101be71" 00:18:31.756 ], 00:18:31.756 "product_name": "Malloc disk", 00:18:31.756 "block_size": 512, 00:18:31.756 "num_blocks": 65536, 00:18:31.756 "uuid": "69cea2f0-9e52-492a-af50-de368101be71", 00:18:31.756 "assigned_rate_limits": { 00:18:31.756 "rw_ios_per_sec": 0, 00:18:31.756 "rw_mbytes_per_sec": 0, 00:18:31.756 "r_mbytes_per_sec": 0, 00:18:31.756 "w_mbytes_per_sec": 0 00:18:31.756 }, 00:18:31.756 "claimed": false, 00:18:31.756 "zoned": false, 00:18:31.756 "supported_io_types": { 00:18:31.756 "read": true, 00:18:31.756 "write": true, 00:18:31.756 "unmap": true, 00:18:31.756 "write_zeroes": true, 00:18:31.756 "flush": true, 00:18:31.756 "reset": true, 00:18:31.756 "compare": false, 00:18:31.756 "compare_and_write": false, 00:18:31.756 "abort": true, 00:18:31.756 "nvme_admin": false, 00:18:31.756 "nvme_io": false 00:18:31.756 }, 00:18:31.756 "memory_domains": [ 00:18:31.756 { 00:18:31.756 "dma_device_id": "system", 00:18:31.756 "dma_device_type": 1 00:18:31.756 }, 00:18:31.756 { 00:18:31.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.756 "dma_device_type": 2 00:18:31.756 } 00:18:31.756 ], 00:18:31.756 "driver_specific": {} 00:18:31.756 } 00:18:31.756 ] 00:18:31.756 00:32:25 -- common/autotest_common.sh@893 -- # return 0 00:18:31.756 00:32:25 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:32.014 [2024-04-24 00:32:25.720496] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.014 [2024-04-24 00:32:25.723012] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:32.014 [2024-04-24 00:32:25.723195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:18:32.014 [2024-04-24 00:32:25.723288] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:32.014 [2024-04-24 00:32:25.723350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.014 00:32:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.272 00:32:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:32.272 "name": "Existed_Raid", 00:18:32.272 "uuid": "b43142bf-5e24-47fe-8b96-cf42d9465f02", 00:18:32.272 "strip_size_kb": 64, 00:18:32.272 "state": "configuring", 00:18:32.272 "raid_level": "raid0", 00:18:32.272 "superblock": true, 00:18:32.272 "num_base_bdevs": 3, 00:18:32.272 "num_base_bdevs_discovered": 1, 00:18:32.272 "num_base_bdevs_operational": 3, 00:18:32.272 "base_bdevs_list": [ 00:18:32.272 { 00:18:32.272 "name": "BaseBdev1", 00:18:32.272 "uuid": "69cea2f0-9e52-492a-af50-de368101be71", 00:18:32.272 "is_configured": true, 00:18:32.272 "data_offset": 2048, 00:18:32.272 "data_size": 63488 00:18:32.272 }, 00:18:32.272 { 00:18:32.272 "name": "BaseBdev2", 00:18:32.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.272 "is_configured": false, 00:18:32.272 "data_offset": 0, 00:18:32.272 "data_size": 0 00:18:32.272 }, 00:18:32.272 { 00:18:32.272 "name": "BaseBdev3", 00:18:32.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.272 "is_configured": false, 00:18:32.272 "data_offset": 0, 00:18:32.272 "data_size": 0 00:18:32.272 } 00:18:32.272 ] 00:18:32.272 }' 00:18:32.272 00:32:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:32.272 00:32:25 -- common/autotest_common.sh@10 -- # set +x 00:18:33.208 00:32:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:33.208 [2024-04-24 00:32:26.909131] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.208 BaseBdev2 00:18:33.208 00:32:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:33.208 00:32:26 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:18:33.208 00:32:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:33.208 00:32:26 -- common/autotest_common.sh@887 -- # local i 00:18:33.208 00:32:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:33.208 00:32:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 
00:18:33.208 00:32:26 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:33.774 00:32:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:33.774 [ 00:18:33.774 { 00:18:33.774 "name": "BaseBdev2", 00:18:33.774 "aliases": [ 00:18:33.774 "60e78a9c-1069-48b7-8e50-e52e636f2904" 00:18:33.774 ], 00:18:33.774 "product_name": "Malloc disk", 00:18:33.774 "block_size": 512, 00:18:33.774 "num_blocks": 65536, 00:18:33.774 "uuid": "60e78a9c-1069-48b7-8e50-e52e636f2904", 00:18:33.774 "assigned_rate_limits": { 00:18:33.774 "rw_ios_per_sec": 0, 00:18:33.774 "rw_mbytes_per_sec": 0, 00:18:33.774 "r_mbytes_per_sec": 0, 00:18:33.774 "w_mbytes_per_sec": 0 00:18:33.774 }, 00:18:33.774 "claimed": true, 00:18:33.774 "claim_type": "exclusive_write", 00:18:33.774 "zoned": false, 00:18:33.774 "supported_io_types": { 00:18:33.774 "read": true, 00:18:33.774 "write": true, 00:18:33.774 "unmap": true, 00:18:33.774 "write_zeroes": true, 00:18:33.774 "flush": true, 00:18:33.774 "reset": true, 00:18:33.774 "compare": false, 00:18:33.774 "compare_and_write": false, 00:18:33.774 "abort": true, 00:18:33.774 "nvme_admin": false, 00:18:33.774 "nvme_io": false 00:18:33.774 }, 00:18:33.774 "memory_domains": [ 00:18:33.774 { 00:18:33.774 "dma_device_id": "system", 00:18:33.774 "dma_device_type": 1 00:18:33.774 }, 00:18:33.774 { 00:18:33.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.774 "dma_device_type": 2 00:18:33.774 } 00:18:33.774 ], 00:18:33.774 "driver_specific": {} 00:18:33.774 } 00:18:33.774 ] 00:18:33.774 00:32:27 -- common/autotest_common.sh@893 -- # return 0 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.774 00:32:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.032 00:32:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:34.032 "name": "Existed_Raid", 00:18:34.032 "uuid": "b43142bf-5e24-47fe-8b96-cf42d9465f02", 00:18:34.032 "strip_size_kb": 64, 00:18:34.032 "state": "configuring", 00:18:34.032 "raid_level": "raid0", 00:18:34.032 "superblock": true, 00:18:34.032 "num_base_bdevs": 3, 00:18:34.032 "num_base_bdevs_discovered": 2, 00:18:34.032 "num_base_bdevs_operational": 3, 00:18:34.032 "base_bdevs_list": [ 00:18:34.032 { 00:18:34.032 "name": "BaseBdev1", 00:18:34.032 "uuid": "69cea2f0-9e52-492a-af50-de368101be71", 00:18:34.032 "is_configured": true, 
00:18:34.032 "data_offset": 2048, 00:18:34.032 "data_size": 63488 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "name": "BaseBdev2", 00:18:34.032 "uuid": "60e78a9c-1069-48b7-8e50-e52e636f2904", 00:18:34.032 "is_configured": true, 00:18:34.032 "data_offset": 2048, 00:18:34.032 "data_size": 63488 00:18:34.032 }, 00:18:34.032 { 00:18:34.032 "name": "BaseBdev3", 00:18:34.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.032 "is_configured": false, 00:18:34.032 "data_offset": 0, 00:18:34.032 "data_size": 0 00:18:34.032 } 00:18:34.032 ] 00:18:34.032 }' 00:18:34.032 00:32:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:34.032 00:32:27 -- common/autotest_common.sh@10 -- # set +x 00:18:34.966 00:32:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:35.224 [2024-04-24 00:32:28.773399] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:35.224 [2024-04-24 00:32:28.773856] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:18:35.224 [2024-04-24 00:32:28.773980] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:35.224 [2024-04-24 00:32:28.774152] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:35.224 [2024-04-24 00:32:28.774538] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:18:35.224 [2024-04-24 00:32:28.774590] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:18:35.224 BaseBdev3 00:18:35.224 [2024-04-24 00:32:28.774906] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.224 00:32:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:35.224 00:32:28 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:18:35.224 00:32:28 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:35.224 00:32:28 -- common/autotest_common.sh@887 -- # local i 00:18:35.224 00:32:28 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:35.224 00:32:28 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:35.224 00:32:28 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:35.482 00:32:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:35.740 [ 00:18:35.740 { 00:18:35.740 "name": "BaseBdev3", 00:18:35.740 "aliases": [ 00:18:35.740 "bdc5e90f-55c5-4de2-ad27-9fedfdabc0e1" 00:18:35.740 ], 00:18:35.740 "product_name": "Malloc disk", 00:18:35.740 "block_size": 512, 00:18:35.740 "num_blocks": 65536, 00:18:35.740 "uuid": "bdc5e90f-55c5-4de2-ad27-9fedfdabc0e1", 00:18:35.740 "assigned_rate_limits": { 00:18:35.740 "rw_ios_per_sec": 0, 00:18:35.740 "rw_mbytes_per_sec": 0, 00:18:35.740 "r_mbytes_per_sec": 0, 00:18:35.740 "w_mbytes_per_sec": 0 00:18:35.740 }, 00:18:35.740 "claimed": true, 00:18:35.740 "claim_type": "exclusive_write", 00:18:35.740 "zoned": false, 00:18:35.740 "supported_io_types": { 00:18:35.740 "read": true, 00:18:35.740 "write": true, 00:18:35.740 "unmap": true, 00:18:35.740 "write_zeroes": true, 00:18:35.740 "flush": true, 00:18:35.740 "reset": true, 00:18:35.740 "compare": false, 00:18:35.740 "compare_and_write": false, 00:18:35.740 "abort": true, 00:18:35.740 "nvme_admin": false, 00:18:35.740 
"nvme_io": false 00:18:35.740 }, 00:18:35.740 "memory_domains": [ 00:18:35.740 { 00:18:35.740 "dma_device_id": "system", 00:18:35.740 "dma_device_type": 1 00:18:35.740 }, 00:18:35.740 { 00:18:35.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.740 "dma_device_type": 2 00:18:35.740 } 00:18:35.740 ], 00:18:35.740 "driver_specific": {} 00:18:35.740 } 00:18:35.740 ] 00:18:35.740 00:32:29 -- common/autotest_common.sh@893 -- # return 0 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.740 00:32:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.998 00:32:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.998 "name": "Existed_Raid", 00:18:35.998 "uuid": "b43142bf-5e24-47fe-8b96-cf42d9465f02", 00:18:35.998 "strip_size_kb": 64, 00:18:35.998 "state": "online", 00:18:35.998 "raid_level": "raid0", 00:18:35.998 "superblock": true, 00:18:35.998 "num_base_bdevs": 3, 00:18:35.998 "num_base_bdevs_discovered": 3, 00:18:35.998 "num_base_bdevs_operational": 3, 00:18:35.998 "base_bdevs_list": [ 00:18:35.998 { 00:18:35.998 "name": "BaseBdev1", 00:18:35.998 "uuid": "69cea2f0-9e52-492a-af50-de368101be71", 00:18:35.998 "is_configured": true, 00:18:35.998 "data_offset": 2048, 00:18:35.998 "data_size": 63488 00:18:35.998 }, 00:18:35.998 { 00:18:35.998 "name": "BaseBdev2", 00:18:35.998 "uuid": "60e78a9c-1069-48b7-8e50-e52e636f2904", 00:18:35.999 "is_configured": true, 00:18:35.999 "data_offset": 2048, 00:18:35.999 "data_size": 63488 00:18:35.999 }, 00:18:35.999 { 00:18:35.999 "name": "BaseBdev3", 00:18:35.999 "uuid": "bdc5e90f-55c5-4de2-ad27-9fedfdabc0e1", 00:18:35.999 "is_configured": true, 00:18:35.999 "data_offset": 2048, 00:18:35.999 "data_size": 63488 00:18:35.999 } 00:18:35.999 ] 00:18:35.999 }' 00:18:35.999 00:32:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.999 00:32:29 -- common/autotest_common.sh@10 -- # set +x 00:18:36.609 00:32:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:36.893 [2024-04-24 00:32:30.573914] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:36.893 [2024-04-24 00:32:30.574122] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.893 [2024-04-24 00:32:30.574331] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:37.151 00:32:30 -- 
bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.151 00:32:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.409 00:32:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.409 "name": "Existed_Raid", 00:18:37.409 "uuid": "b43142bf-5e24-47fe-8b96-cf42d9465f02", 00:18:37.409 "strip_size_kb": 64, 00:18:37.409 "state": "offline", 00:18:37.409 "raid_level": "raid0", 00:18:37.409 "superblock": true, 00:18:37.409 "num_base_bdevs": 3, 00:18:37.409 "num_base_bdevs_discovered": 2, 00:18:37.409 "num_base_bdevs_operational": 2, 00:18:37.409 "base_bdevs_list": [ 00:18:37.409 { 00:18:37.409 "name": null, 00:18:37.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.409 "is_configured": false, 00:18:37.409 "data_offset": 2048, 00:18:37.409 "data_size": 63488 00:18:37.409 }, 00:18:37.409 { 00:18:37.409 "name": "BaseBdev2", 00:18:37.409 "uuid": "60e78a9c-1069-48b7-8e50-e52e636f2904", 00:18:37.409 "is_configured": true, 00:18:37.409 "data_offset": 2048, 00:18:37.409 "data_size": 63488 00:18:37.409 }, 00:18:37.409 { 00:18:37.409 "name": "BaseBdev3", 00:18:37.409 "uuid": "bdc5e90f-55c5-4de2-ad27-9fedfdabc0e1", 00:18:37.409 "is_configured": true, 00:18:37.409 "data_offset": 2048, 00:18:37.409 "data_size": 63488 00:18:37.409 } 00:18:37.409 ] 00:18:37.409 }' 00:18:37.409 00:32:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.409 00:32:31 -- common/autotest_common.sh@10 -- # set +x 00:18:38.025 00:32:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:38.025 00:32:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:38.025 00:32:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:38.025 00:32:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.283 00:32:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:38.283 00:32:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:38.283 00:32:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:38.541 [2024-04-24 00:32:32.238404] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:38.799 00:32:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:38.799 00:32:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:38.799 00:32:32 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.799 00:32:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:39.057 00:32:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:39.057 00:32:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:39.057 00:32:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:39.315 [2024-04-24 00:32:32.862463] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:39.315 [2024-04-24 00:32:32.862721] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:18:39.315 00:32:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:39.315 00:32:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:39.315 00:32:32 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:39.315 00:32:32 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.573 00:32:33 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:39.573 00:32:33 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:39.573 00:32:33 -- bdev/bdev_raid.sh@287 -- # killprocess 123677 00:18:39.573 00:32:33 -- common/autotest_common.sh@936 -- # '[' -z 123677 ']' 00:18:39.573 00:32:33 -- common/autotest_common.sh@940 -- # kill -0 123677 00:18:39.573 00:32:33 -- common/autotest_common.sh@941 -- # uname 00:18:39.573 00:32:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:39.573 00:32:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123677 00:18:39.573 killing process with pid 123677 00:18:39.573 00:32:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:39.573 00:32:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:39.573 00:32:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123677' 00:18:39.573 00:32:33 -- common/autotest_common.sh@955 -- # kill 123677 00:18:39.573 00:32:33 -- common/autotest_common.sh@960 -- # wait 123677 00:18:39.573 [2024-04-24 00:32:33.287058] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:39.573 [2024-04-24 00:32:33.287181] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:40.948 ************************************ 00:18:40.948 END TEST raid_state_function_test_sb 00:18:40.948 ************************************ 00:18:40.948 00:32:34 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:40.948 00:18:40.948 real 0m15.106s 00:18:40.948 user 0m25.951s 00:18:40.948 sys 0m2.047s 00:18:40.948 00:32:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:40.948 00:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:18:41.265 00:32:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:41.265 00:32:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:41.265 00:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:41.265 ************************************ 00:18:41.265 START TEST raid_superblock_test 00:18:41.265 ************************************ 00:18:41.265 00:32:34 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid0 3 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@339 -- # local 
num_base_bdevs=3 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@357 -- # raid_pid=124097 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124097 /var/tmp/spdk-raid.sock 00:18:41.265 00:32:34 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:41.265 00:32:34 -- common/autotest_common.sh@817 -- # '[' -z 124097 ']' 00:18:41.265 00:32:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:41.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:41.265 00:32:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:41.265 00:32:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:41.265 00:32:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:41.265 00:32:34 -- common/autotest_common.sh@10 -- # set +x 00:18:41.265 [2024-04-24 00:32:34.907297] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:18:41.265 [2024-04-24 00:32:34.907700] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124097 ] 00:18:41.522 [2024-04-24 00:32:35.096255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.784 [2024-04-24 00:32:35.328596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.784 [2024-04-24 00:32:35.563226] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:42.351 00:32:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:42.351 00:32:35 -- common/autotest_common.sh@850 -- # return 0 00:18:42.351 00:32:35 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:42.351 00:32:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:42.351 00:32:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:42.351 00:32:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:42.351 00:32:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:42.351 00:32:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:42.351 00:32:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:42.351 00:32:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:42.351 00:32:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:42.351 malloc1 00:18:42.351 00:32:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:42.610 [2024-04-24 00:32:36.332528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:42.610 [2024-04-24 00:32:36.332910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.610 [2024-04-24 00:32:36.333077] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:42.610 [2024-04-24 00:32:36.333240] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.610 [2024-04-24 00:32:36.336278] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.610 [2024-04-24 00:32:36.336533] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:42.610 pt1 00:18:42.610 00:32:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:42.610 00:32:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:42.610 00:32:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:42.610 00:32:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:42.610 00:32:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:42.610 00:32:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:42.610 00:32:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:42.610 00:32:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:42.610 00:32:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:42.868 malloc2 00:18:42.868 00:32:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:43.126 [2024-04-24 00:32:36.854121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:43.126 [2024-04-24 00:32:36.854492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.126 [2024-04-24 00:32:36.854678] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:43.126 [2024-04-24 00:32:36.854888] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.126 [2024-04-24 00:32:36.857665] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.126 [2024-04-24 00:32:36.857893] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:43.126 pt2 00:18:43.126 00:32:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:43.126 00:32:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:43.126 00:32:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:43.126 00:32:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:43.126 00:32:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:43.126 00:32:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:43.126 00:32:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:43.126 00:32:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:43.126 00:32:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:43.384 malloc3 00:18:43.659 00:32:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:43.919 [2024-04-24 00:32:37.477176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:43.919 [2024-04-24 00:32:37.477515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.919 [2024-04-24 00:32:37.477692] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:43.919 [2024-04-24 00:32:37.477842] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.919 [2024-04-24 00:32:37.480480] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.919 [2024-04-24 00:32:37.480732] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:43.919 pt3 00:18:43.919 00:32:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:43.919 00:32:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:43.919 00:32:37 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:43.919 [2024-04-24 00:32:37.709312] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:44.178 [2024-04-24 00:32:37.711824] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:44.178 [2024-04-24 00:32:37.712073] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:44.178 [2024-04-24 00:32:37.712446] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:18:44.178 [2024-04-24 00:32:37.712599] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:44.178 [2024-04-24 00:32:37.712835] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:44.178 [2024-04-24 00:32:37.713345] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:18:44.178 [2024-04-24 00:32:37.713484] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:18:44.178 [2024-04-24 00:32:37.713830] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:44.178 "name": "raid_bdev1", 00:18:44.178 "uuid": "8468823e-e5ab-4e04-8e9c-8e15a7db9665", 00:18:44.178 "strip_size_kb": 64, 00:18:44.178 "state": "online", 00:18:44.178 "raid_level": "raid0", 00:18:44.178 "superblock": true, 00:18:44.178 "num_base_bdevs": 3, 00:18:44.178 "num_base_bdevs_discovered": 3, 00:18:44.178 "num_base_bdevs_operational": 3, 00:18:44.178 "base_bdevs_list": [ 00:18:44.178 { 00:18:44.178 "name": "pt1", 00:18:44.178 "uuid": "081e1a55-91c6-5fca-b23a-c4394ae74247", 00:18:44.178 "is_configured": true, 00:18:44.178 "data_offset": 2048, 00:18:44.178 "data_size": 63488 00:18:44.178 }, 00:18:44.178 { 00:18:44.178 "name": "pt2", 00:18:44.178 "uuid": "b57fdbfa-7f4f-535d-ab89-eaf9d739a443", 00:18:44.178 "is_configured": true, 00:18:44.178 "data_offset": 2048, 00:18:44.178 "data_size": 63488 00:18:44.178 }, 00:18:44.178 { 00:18:44.178 "name": "pt3", 00:18:44.178 "uuid": "2467655c-6b68-5394-9347-19adf3b1fc15", 00:18:44.178 "is_configured": true, 00:18:44.178 "data_offset": 2048, 00:18:44.178 "data_size": 63488 00:18:44.178 } 00:18:44.178 ] 00:18:44.178 }' 00:18:44.178 00:32:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:44.178 00:32:37 -- common/autotest_common.sh@10 -- # set +x 00:18:45.111 00:32:38 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:45.111 00:32:38 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:45.111 [2024-04-24 00:32:38.754329] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.111 00:32:38 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=8468823e-e5ab-4e04-8e9c-8e15a7db9665 00:18:45.111 00:32:38 -- bdev/bdev_raid.sh@380 -- # '[' -z 8468823e-e5ab-4e04-8e9c-8e15a7db9665 ']' 00:18:45.111 00:32:38 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:45.369 [2024-04-24 00:32:38.978066] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.369 [2024-04-24 00:32:38.978311] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.369 [2024-04-24 00:32:38.978490] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.369 [2024-04-24 00:32:38.978681] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.369 [2024-04-24 00:32:38.978773] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:18:45.369 00:32:38 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.369 00:32:38 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:45.627 00:32:39 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:45.627 00:32:39 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:45.627 00:32:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:45.627 00:32:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:45.885 00:32:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:45.885 00:32:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:46.143 00:32:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:46.143 00:32:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:46.143 00:32:39 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:46.143 00:32:39 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:46.409 00:32:40 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:46.409 00:32:40 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:46.409 00:32:40 -- common/autotest_common.sh@638 -- # local es=0 00:18:46.409 00:32:40 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:46.409 00:32:40 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:46.409 00:32:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:46.409 00:32:40 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:46.409 00:32:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:46.409 00:32:40 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:46.409 00:32:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:46.409 00:32:40 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:46.409 00:32:40 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:46.409 00:32:40 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:46.668 [2024-04-24 00:32:40.410398] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:46.668 [2024-04-24 00:32:40.413033] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:46.668 [2024-04-24 00:32:40.413263] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:46.668 [2024-04-24 00:32:40.413359] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:46.668 [2024-04-24 00:32:40.413608] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:46.668 [2024-04-24 00:32:40.413768] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:46.668 [2024-04-24 00:32:40.413900] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.668 [2024-04-24 00:32:40.413988] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:18:46.668 request: 00:18:46.668 { 00:18:46.668 "name": "raid_bdev1", 00:18:46.668 "raid_level": "raid0", 00:18:46.668 "base_bdevs": [ 00:18:46.668 "malloc1", 00:18:46.668 "malloc2", 00:18:46.668 "malloc3" 00:18:46.668 ], 00:18:46.668 "superblock": false, 00:18:46.668 "strip_size_kb": 64, 00:18:46.668 "method": "bdev_raid_create", 00:18:46.668 "req_id": 1 00:18:46.668 } 00:18:46.668 Got JSON-RPC error response 00:18:46.668 response: 00:18:46.668 { 00:18:46.668 "code": -17, 00:18:46.668 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:46.668 } 00:18:46.668 00:32:40 -- common/autotest_common.sh@641 -- # es=1 00:18:46.668 00:32:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:46.668 00:32:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:46.668 00:32:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:46.668 00:32:40 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.668 00:32:40 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:46.924 00:32:40 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:46.924 00:32:40 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:46.924 00:32:40 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:47.180 [2024-04-24 00:32:40.914403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:47.180 [2024-04-24 00:32:40.914725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.180 [2024-04-24 00:32:40.914824] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:47.180 [2024-04-24 00:32:40.914964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.180 [2024-04-24 00:32:40.917675] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.180 [2024-04-24 00:32:40.917888] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:47.180 [2024-04-24 00:32:40.918128] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:47.180 [2024-04-24 00:32:40.918264] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:47.180 pt1 00:18:47.181 00:32:40 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid0 64 3 00:18:47.181 00:32:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:47.181 00:32:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:47.181 00:32:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:47.181 00:32:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:47.181 00:32:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:47.181 00:32:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.181 00:32:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.181 00:32:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.181 00:32:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.181 00:32:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.181 00:32:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.438 00:32:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.438 "name": "raid_bdev1", 00:18:47.438 "uuid": "8468823e-e5ab-4e04-8e9c-8e15a7db9665", 00:18:47.438 "strip_size_kb": 64, 00:18:47.438 "state": "configuring", 00:18:47.438 "raid_level": "raid0", 00:18:47.438 "superblock": true, 00:18:47.438 "num_base_bdevs": 3, 00:18:47.438 "num_base_bdevs_discovered": 1, 00:18:47.438 "num_base_bdevs_operational": 3, 00:18:47.438 "base_bdevs_list": [ 00:18:47.438 { 00:18:47.438 "name": "pt1", 00:18:47.438 "uuid": "081e1a55-91c6-5fca-b23a-c4394ae74247", 00:18:47.438 "is_configured": true, 00:18:47.438 "data_offset": 2048, 00:18:47.438 "data_size": 63488 00:18:47.438 }, 00:18:47.438 { 00:18:47.438 "name": null, 00:18:47.438 "uuid": "b57fdbfa-7f4f-535d-ab89-eaf9d739a443", 00:18:47.438 "is_configured": false, 00:18:47.438 "data_offset": 2048, 00:18:47.438 "data_size": 63488 00:18:47.438 }, 00:18:47.438 { 00:18:47.438 "name": null, 00:18:47.438 "uuid": "2467655c-6b68-5394-9347-19adf3b1fc15", 00:18:47.438 "is_configured": false, 00:18:47.438 "data_offset": 2048, 00:18:47.438 "data_size": 63488 00:18:47.438 } 00:18:47.438 ] 00:18:47.438 }' 00:18:47.438 00:32:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.438 00:32:41 -- common/autotest_common.sh@10 -- # set +x 00:18:48.371 00:32:41 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:18:48.371 00:32:41 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:48.371 [2024-04-24 00:32:42.114861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:48.371 [2024-04-24 00:32:42.115409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.371 [2024-04-24 00:32:42.115585] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:48.371 [2024-04-24 00:32:42.115702] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.371 [2024-04-24 00:32:42.116251] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.371 [2024-04-24 00:32:42.116339] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:48.371 [2024-04-24 00:32:42.116517] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:48.371 [2024-04-24 00:32:42.116678] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:48.371 pt2 00:18:48.371 00:32:42 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:48.628 [2024-04-24 00:32:42.342998] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:48.628 00:32:42 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:18:48.628 00:32:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:48.628 00:32:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:48.628 00:32:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:48.628 00:32:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:48.628 00:32:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:48.628 00:32:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.628 00:32:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.628 00:32:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.628 00:32:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.628 00:32:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.628 00:32:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.886 00:32:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.886 "name": "raid_bdev1", 00:18:48.886 "uuid": "8468823e-e5ab-4e04-8e9c-8e15a7db9665", 00:18:48.886 "strip_size_kb": 64, 00:18:48.886 "state": "configuring", 00:18:48.886 "raid_level": "raid0", 00:18:48.886 "superblock": true, 00:18:48.886 "num_base_bdevs": 3, 00:18:48.886 "num_base_bdevs_discovered": 1, 00:18:48.886 "num_base_bdevs_operational": 3, 00:18:48.886 "base_bdevs_list": [ 00:18:48.886 { 00:18:48.886 "name": "pt1", 00:18:48.886 "uuid": "081e1a55-91c6-5fca-b23a-c4394ae74247", 00:18:48.886 "is_configured": true, 00:18:48.886 "data_offset": 2048, 00:18:48.886 "data_size": 63488 00:18:48.886 }, 00:18:48.886 { 00:18:48.886 "name": null, 00:18:48.886 "uuid": "b57fdbfa-7f4f-535d-ab89-eaf9d739a443", 00:18:48.886 "is_configured": false, 00:18:48.886 "data_offset": 2048, 00:18:48.886 "data_size": 63488 00:18:48.886 }, 00:18:48.886 { 00:18:48.886 "name": null, 00:18:48.886 "uuid": "2467655c-6b68-5394-9347-19adf3b1fc15", 00:18:48.886 "is_configured": false, 00:18:48.886 "data_offset": 2048, 00:18:48.886 "data_size": 63488 00:18:48.886 } 00:18:48.886 ] 00:18:48.886 }' 00:18:48.886 00:32:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.886 00:32:42 -- common/autotest_common.sh@10 -- # set +x 00:18:49.491 00:32:43 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:49.491 00:32:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:49.491 00:32:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:49.752 [2024-04-24 00:32:43.451229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:49.752 [2024-04-24 00:32:43.451531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.752 [2024-04-24 00:32:43.451612] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:49.752 [2024-04-24 00:32:43.451891] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.752 [2024-04-24 00:32:43.452432] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.752 [2024-04-24 00:32:43.452613] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:49.752 [2024-04-24 00:32:43.452896] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:49.752 [2024-04-24 00:32:43.453020] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:49.752 pt2 00:18:49.752 00:32:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:49.752 00:32:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:49.752 00:32:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:50.009 [2024-04-24 00:32:43.671256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:50.009 [2024-04-24 00:32:43.671524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.009 [2024-04-24 00:32:43.671613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:50.009 [2024-04-24 00:32:43.671877] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.009 [2024-04-24 00:32:43.672413] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.009 [2024-04-24 00:32:43.672595] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:50.009 [2024-04-24 00:32:43.672858] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:50.009 [2024-04-24 00:32:43.672974] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:50.009 [2024-04-24 00:32:43.673151] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:18:50.009 [2024-04-24 00:32:43.673242] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:50.009 [2024-04-24 00:32:43.673400] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:50.009 [2024-04-24 00:32:43.673765] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:18:50.009 [2024-04-24 00:32:43.673879] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:18:50.009 [2024-04-24 00:32:43.674135] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.009 pt3 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.009 00:32:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.009 
00:32:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.267 00:32:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.267 "name": "raid_bdev1", 00:18:50.267 "uuid": "8468823e-e5ab-4e04-8e9c-8e15a7db9665", 00:18:50.267 "strip_size_kb": 64, 00:18:50.267 "state": "online", 00:18:50.267 "raid_level": "raid0", 00:18:50.268 "superblock": true, 00:18:50.268 "num_base_bdevs": 3, 00:18:50.268 "num_base_bdevs_discovered": 3, 00:18:50.268 "num_base_bdevs_operational": 3, 00:18:50.268 "base_bdevs_list": [ 00:18:50.268 { 00:18:50.268 "name": "pt1", 00:18:50.268 "uuid": "081e1a55-91c6-5fca-b23a-c4394ae74247", 00:18:50.268 "is_configured": true, 00:18:50.268 "data_offset": 2048, 00:18:50.268 "data_size": 63488 00:18:50.268 }, 00:18:50.268 { 00:18:50.268 "name": "pt2", 00:18:50.268 "uuid": "b57fdbfa-7f4f-535d-ab89-eaf9d739a443", 00:18:50.268 "is_configured": true, 00:18:50.268 "data_offset": 2048, 00:18:50.268 "data_size": 63488 00:18:50.268 }, 00:18:50.268 { 00:18:50.268 "name": "pt3", 00:18:50.268 "uuid": "2467655c-6b68-5394-9347-19adf3b1fc15", 00:18:50.268 "is_configured": true, 00:18:50.268 "data_offset": 2048, 00:18:50.268 "data_size": 63488 00:18:50.268 } 00:18:50.268 ] 00:18:50.268 }' 00:18:50.268 00:32:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.268 00:32:43 -- common/autotest_common.sh@10 -- # set +x 00:18:50.835 00:32:44 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:50.835 00:32:44 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:51.093 [2024-04-24 00:32:44.787772] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.093 00:32:44 -- bdev/bdev_raid.sh@430 -- # '[' 8468823e-e5ab-4e04-8e9c-8e15a7db9665 '!=' 8468823e-e5ab-4e04-8e9c-8e15a7db9665 ']' 00:18:51.093 00:32:44 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:51.093 00:32:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:51.093 00:32:44 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:51.093 00:32:44 -- bdev/bdev_raid.sh@511 -- # killprocess 124097 00:18:51.093 00:32:44 -- common/autotest_common.sh@936 -- # '[' -z 124097 ']' 00:18:51.093 00:32:44 -- common/autotest_common.sh@940 -- # kill -0 124097 00:18:51.093 00:32:44 -- common/autotest_common.sh@941 -- # uname 00:18:51.093 00:32:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:51.093 00:32:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124097 00:18:51.093 00:32:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:51.093 00:32:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:51.093 00:32:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124097' 00:18:51.093 killing process with pid 124097 00:18:51.093 00:32:44 -- common/autotest_common.sh@955 -- # kill 124097 00:18:51.093 [2024-04-24 00:32:44.841575] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:51.093 00:32:44 -- common/autotest_common.sh@960 -- # wait 124097 00:18:51.093 [2024-04-24 00:32:44.841841] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.093 [2024-04-24 00:32:44.842032] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.093 [2024-04-24 00:32:44.842114] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:18:51.660 [2024-04-24 00:32:45.167230] 
bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:53.036 ************************************ 00:18:53.036 END TEST raid_superblock_test 00:18:53.036 ************************************ 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:53.036 00:18:53.036 real 0m11.764s 00:18:53.036 user 0m19.695s 00:18:53.036 sys 0m1.673s 00:18:53.036 00:32:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:53.036 00:32:46 -- common/autotest_common.sh@10 -- # set +x 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:18:53.036 00:32:46 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:53.036 00:32:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:53.036 00:32:46 -- common/autotest_common.sh@10 -- # set +x 00:18:53.036 ************************************ 00:18:53.036 START TEST raid_state_function_test 00:18:53.036 ************************************ 00:18:53.036 00:32:46 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 3 false 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=124425 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124425' 00:18:53.036 Process raid pid: 124425 00:18:53.036 00:32:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124425 /var/tmp/spdk-raid.sock 00:18:53.036 00:32:46 
-- common/autotest_common.sh@817 -- # '[' -z 124425 ']' 00:18:53.036 00:32:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:53.036 00:32:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:53.036 00:32:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:53.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:53.036 00:32:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:53.036 00:32:46 -- common/autotest_common.sh@10 -- # set +x 00:18:53.036 [2024-04-24 00:32:46.781348] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:18:53.037 [2024-04-24 00:32:46.781729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.295 [2024-04-24 00:32:46.953666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.553 [2024-04-24 00:32:47.237084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.811 [2024-04-24 00:32:47.485699] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.070 00:32:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:54.070 00:32:47 -- common/autotest_common.sh@850 -- # return 0 00:18:54.070 00:32:47 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:54.328 [2024-04-24 00:32:48.028398] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:54.328 [2024-04-24 00:32:48.028760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:54.328 [2024-04-24 00:32:48.028903] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.328 [2024-04-24 00:32:48.028975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.328 [2024-04-24 00:32:48.029087] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:54.328 [2024-04-24 00:32:48.029190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:54.328 00:32:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:54.328 00:32:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:54.328 00:32:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:54.328 00:32:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:54.328 00:32:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:54.328 00:32:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:54.328 00:32:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:54.328 00:32:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:54.328 00:32:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:54.328 00:32:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:54.328 00:32:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.328 00:32:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:18:54.587 00:32:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:54.587 "name": "Existed_Raid", 00:18:54.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.588 "strip_size_kb": 64, 00:18:54.588 "state": "configuring", 00:18:54.588 "raid_level": "concat", 00:18:54.588 "superblock": false, 00:18:54.588 "num_base_bdevs": 3, 00:18:54.588 "num_base_bdevs_discovered": 0, 00:18:54.588 "num_base_bdevs_operational": 3, 00:18:54.588 "base_bdevs_list": [ 00:18:54.588 { 00:18:54.588 "name": "BaseBdev1", 00:18:54.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.588 "is_configured": false, 00:18:54.588 "data_offset": 0, 00:18:54.588 "data_size": 0 00:18:54.588 }, 00:18:54.588 { 00:18:54.588 "name": "BaseBdev2", 00:18:54.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.588 "is_configured": false, 00:18:54.588 "data_offset": 0, 00:18:54.588 "data_size": 0 00:18:54.588 }, 00:18:54.588 { 00:18:54.588 "name": "BaseBdev3", 00:18:54.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.588 "is_configured": false, 00:18:54.588 "data_offset": 0, 00:18:54.588 "data_size": 0 00:18:54.588 } 00:18:54.588 ] 00:18:54.588 }' 00:18:54.588 00:32:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:54.588 00:32:48 -- common/autotest_common.sh@10 -- # set +x 00:18:55.521 00:32:48 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:55.521 [2024-04-24 00:32:49.212537] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:55.521 [2024-04-24 00:32:49.212867] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:18:55.521 00:32:49 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:55.778 [2024-04-24 00:32:49.516608] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:55.778 [2024-04-24 00:32:49.516989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:55.778 [2024-04-24 00:32:49.517127] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:55.778 [2024-04-24 00:32:49.517265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.778 [2024-04-24 00:32:49.517368] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:55.778 [2024-04-24 00:32:49.517502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:55.778 00:32:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:56.035 [2024-04-24 00:32:49.796204] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:56.035 BaseBdev1 00:18:56.035 00:32:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:56.035 00:32:49 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:56.035 00:32:49 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:56.035 00:32:49 -- common/autotest_common.sh@887 -- # local i 00:18:56.035 00:32:49 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:56.035 00:32:49 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:56.035 00:32:49 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:56.600 00:32:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:56.858 [ 00:18:56.858 { 00:18:56.858 "name": "BaseBdev1", 00:18:56.858 "aliases": [ 00:18:56.858 "40c2fb53-a500-47b7-b6b8-7a23c5f3c89b" 00:18:56.858 ], 00:18:56.858 "product_name": "Malloc disk", 00:18:56.858 "block_size": 512, 00:18:56.858 "num_blocks": 65536, 00:18:56.858 "uuid": "40c2fb53-a500-47b7-b6b8-7a23c5f3c89b", 00:18:56.858 "assigned_rate_limits": { 00:18:56.858 "rw_ios_per_sec": 0, 00:18:56.858 "rw_mbytes_per_sec": 0, 00:18:56.858 "r_mbytes_per_sec": 0, 00:18:56.858 "w_mbytes_per_sec": 0 00:18:56.858 }, 00:18:56.858 "claimed": true, 00:18:56.858 "claim_type": "exclusive_write", 00:18:56.858 "zoned": false, 00:18:56.858 "supported_io_types": { 00:18:56.858 "read": true, 00:18:56.858 "write": true, 00:18:56.858 "unmap": true, 00:18:56.858 "write_zeroes": true, 00:18:56.858 "flush": true, 00:18:56.858 "reset": true, 00:18:56.858 "compare": false, 00:18:56.858 "compare_and_write": false, 00:18:56.858 "abort": true, 00:18:56.858 "nvme_admin": false, 00:18:56.858 "nvme_io": false 00:18:56.858 }, 00:18:56.858 "memory_domains": [ 00:18:56.858 { 00:18:56.858 "dma_device_id": "system", 00:18:56.858 "dma_device_type": 1 00:18:56.858 }, 00:18:56.858 { 00:18:56.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.858 "dma_device_type": 2 00:18:56.858 } 00:18:56.858 ], 00:18:56.858 "driver_specific": {} 00:18:56.858 } 00:18:56.858 ] 00:18:56.858 00:32:50 -- common/autotest_common.sh@893 -- # return 0 00:18:56.858 00:32:50 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:56.858 00:32:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:56.858 00:32:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:56.858 00:32:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:56.858 00:32:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:56.858 00:32:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:56.858 00:32:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.858 00:32:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.858 00:32:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.858 00:32:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.858 00:32:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.858 00:32:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.114 00:32:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:57.114 "name": "Existed_Raid", 00:18:57.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.114 "strip_size_kb": 64, 00:18:57.114 "state": "configuring", 00:18:57.114 "raid_level": "concat", 00:18:57.114 "superblock": false, 00:18:57.114 "num_base_bdevs": 3, 00:18:57.114 "num_base_bdevs_discovered": 1, 00:18:57.114 "num_base_bdevs_operational": 3, 00:18:57.114 "base_bdevs_list": [ 00:18:57.114 { 00:18:57.114 "name": "BaseBdev1", 00:18:57.114 "uuid": "40c2fb53-a500-47b7-b6b8-7a23c5f3c89b", 00:18:57.114 "is_configured": true, 00:18:57.114 "data_offset": 0, 00:18:57.114 "data_size": 65536 00:18:57.114 }, 00:18:57.114 { 00:18:57.114 "name": "BaseBdev2", 00:18:57.114 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:57.114 "is_configured": false, 00:18:57.114 "data_offset": 0, 00:18:57.114 "data_size": 0 00:18:57.114 }, 00:18:57.114 { 00:18:57.114 "name": "BaseBdev3", 00:18:57.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.114 "is_configured": false, 00:18:57.114 "data_offset": 0, 00:18:57.114 "data_size": 0 00:18:57.114 } 00:18:57.114 ] 00:18:57.114 }' 00:18:57.114 00:32:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:57.114 00:32:50 -- common/autotest_common.sh@10 -- # set +x 00:18:57.681 00:32:51 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:57.681 [2024-04-24 00:32:51.468692] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:57.681 [2024-04-24 00:32:51.469029] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:57.941 [2024-04-24 00:32:51.688809] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.941 [2024-04-24 00:32:51.691344] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.941 [2024-04-24 00:32:51.691614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.941 [2024-04-24 00:32:51.691724] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:57.941 [2024-04-24 00:32:51.691791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.941 00:32:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.199 00:32:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:58.199 "name": "Existed_Raid", 00:18:58.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.199 "strip_size_kb": 64, 00:18:58.199 "state": "configuring", 00:18:58.199 "raid_level": "concat", 00:18:58.199 "superblock": false, 00:18:58.200 "num_base_bdevs": 3, 00:18:58.200 "num_base_bdevs_discovered": 1, 00:18:58.200 "num_base_bdevs_operational": 3, 00:18:58.200 "base_bdevs_list": [ 00:18:58.200 { 00:18:58.200 "name": 
"BaseBdev1", 00:18:58.200 "uuid": "40c2fb53-a500-47b7-b6b8-7a23c5f3c89b", 00:18:58.200 "is_configured": true, 00:18:58.200 "data_offset": 0, 00:18:58.200 "data_size": 65536 00:18:58.200 }, 00:18:58.200 { 00:18:58.200 "name": "BaseBdev2", 00:18:58.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.200 "is_configured": false, 00:18:58.200 "data_offset": 0, 00:18:58.200 "data_size": 0 00:18:58.200 }, 00:18:58.200 { 00:18:58.200 "name": "BaseBdev3", 00:18:58.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.200 "is_configured": false, 00:18:58.200 "data_offset": 0, 00:18:58.200 "data_size": 0 00:18:58.200 } 00:18:58.200 ] 00:18:58.200 }' 00:18:58.200 00:32:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:58.200 00:32:51 -- common/autotest_common.sh@10 -- # set +x 00:18:59.133 00:32:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:59.133 [2024-04-24 00:32:52.909117] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:59.133 BaseBdev2 00:18:59.420 00:32:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:59.420 00:32:52 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:18:59.420 00:32:52 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:59.420 00:32:52 -- common/autotest_common.sh@887 -- # local i 00:18:59.420 00:32:52 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:59.420 00:32:52 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:59.420 00:32:52 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:59.420 00:32:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:59.677 [ 00:18:59.677 { 00:18:59.677 "name": "BaseBdev2", 00:18:59.677 "aliases": [ 00:18:59.677 "6ba9921d-b9a3-4f7c-95cb-53039c8efc66" 00:18:59.677 ], 00:18:59.677 "product_name": "Malloc disk", 00:18:59.678 "block_size": 512, 00:18:59.678 "num_blocks": 65536, 00:18:59.678 "uuid": "6ba9921d-b9a3-4f7c-95cb-53039c8efc66", 00:18:59.678 "assigned_rate_limits": { 00:18:59.678 "rw_ios_per_sec": 0, 00:18:59.678 "rw_mbytes_per_sec": 0, 00:18:59.678 "r_mbytes_per_sec": 0, 00:18:59.678 "w_mbytes_per_sec": 0 00:18:59.678 }, 00:18:59.678 "claimed": true, 00:18:59.678 "claim_type": "exclusive_write", 00:18:59.678 "zoned": false, 00:18:59.678 "supported_io_types": { 00:18:59.678 "read": true, 00:18:59.678 "write": true, 00:18:59.678 "unmap": true, 00:18:59.678 "write_zeroes": true, 00:18:59.678 "flush": true, 00:18:59.678 "reset": true, 00:18:59.678 "compare": false, 00:18:59.678 "compare_and_write": false, 00:18:59.678 "abort": true, 00:18:59.678 "nvme_admin": false, 00:18:59.678 "nvme_io": false 00:18:59.678 }, 00:18:59.678 "memory_domains": [ 00:18:59.678 { 00:18:59.678 "dma_device_id": "system", 00:18:59.678 "dma_device_type": 1 00:18:59.678 }, 00:18:59.678 { 00:18:59.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.678 "dma_device_type": 2 00:18:59.678 } 00:18:59.678 ], 00:18:59.678 "driver_specific": {} 00:18:59.678 } 00:18:59.678 ] 00:18:59.678 00:32:53 -- common/autotest_common.sh@893 -- # return 0 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring 
concat 64 3 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.678 00:32:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.936 00:32:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:59.936 "name": "Existed_Raid", 00:18:59.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.936 "strip_size_kb": 64, 00:18:59.936 "state": "configuring", 00:18:59.936 "raid_level": "concat", 00:18:59.936 "superblock": false, 00:18:59.936 "num_base_bdevs": 3, 00:18:59.936 "num_base_bdevs_discovered": 2, 00:18:59.936 "num_base_bdevs_operational": 3, 00:18:59.936 "base_bdevs_list": [ 00:18:59.936 { 00:18:59.936 "name": "BaseBdev1", 00:18:59.936 "uuid": "40c2fb53-a500-47b7-b6b8-7a23c5f3c89b", 00:18:59.936 "is_configured": true, 00:18:59.936 "data_offset": 0, 00:18:59.936 "data_size": 65536 00:18:59.936 }, 00:18:59.936 { 00:18:59.936 "name": "BaseBdev2", 00:18:59.936 "uuid": "6ba9921d-b9a3-4f7c-95cb-53039c8efc66", 00:18:59.936 "is_configured": true, 00:18:59.936 "data_offset": 0, 00:18:59.936 "data_size": 65536 00:18:59.936 }, 00:18:59.936 { 00:18:59.936 "name": "BaseBdev3", 00:18:59.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.936 "is_configured": false, 00:18:59.936 "data_offset": 0, 00:18:59.936 "data_size": 0 00:18:59.936 } 00:18:59.936 ] 00:18:59.936 }' 00:18:59.936 00:32:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:59.936 00:32:53 -- common/autotest_common.sh@10 -- # set +x 00:19:00.502 00:32:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:00.760 [2024-04-24 00:32:54.523124] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:00.760 [2024-04-24 00:32:54.523457] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:19:00.760 [2024-04-24 00:32:54.523505] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:00.760 [2024-04-24 00:32:54.523781] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:00.760 [2024-04-24 00:32:54.524268] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:19:00.760 [2024-04-24 00:32:54.524390] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:19:00.760 [2024-04-24 00:32:54.524790] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.760 BaseBdev3 00:19:01.018 00:32:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:01.018 00:32:54 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:19:01.018 00:32:54 -- 
common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:01.018 00:32:54 -- common/autotest_common.sh@887 -- # local i 00:19:01.018 00:32:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:01.018 00:32:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:01.018 00:32:54 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:01.018 00:32:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:01.276 [ 00:19:01.276 { 00:19:01.276 "name": "BaseBdev3", 00:19:01.276 "aliases": [ 00:19:01.276 "8b8a37c4-cb34-4d49-9b3b-a7dd5e8f0cd9" 00:19:01.276 ], 00:19:01.276 "product_name": "Malloc disk", 00:19:01.276 "block_size": 512, 00:19:01.276 "num_blocks": 65536, 00:19:01.276 "uuid": "8b8a37c4-cb34-4d49-9b3b-a7dd5e8f0cd9", 00:19:01.276 "assigned_rate_limits": { 00:19:01.276 "rw_ios_per_sec": 0, 00:19:01.276 "rw_mbytes_per_sec": 0, 00:19:01.276 "r_mbytes_per_sec": 0, 00:19:01.276 "w_mbytes_per_sec": 0 00:19:01.276 }, 00:19:01.276 "claimed": true, 00:19:01.276 "claim_type": "exclusive_write", 00:19:01.276 "zoned": false, 00:19:01.276 "supported_io_types": { 00:19:01.276 "read": true, 00:19:01.276 "write": true, 00:19:01.276 "unmap": true, 00:19:01.276 "write_zeroes": true, 00:19:01.276 "flush": true, 00:19:01.276 "reset": true, 00:19:01.276 "compare": false, 00:19:01.276 "compare_and_write": false, 00:19:01.276 "abort": true, 00:19:01.276 "nvme_admin": false, 00:19:01.276 "nvme_io": false 00:19:01.276 }, 00:19:01.276 "memory_domains": [ 00:19:01.276 { 00:19:01.276 "dma_device_id": "system", 00:19:01.276 "dma_device_type": 1 00:19:01.276 }, 00:19:01.276 { 00:19:01.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.276 "dma_device_type": 2 00:19:01.276 } 00:19:01.276 ], 00:19:01.276 "driver_specific": {} 00:19:01.276 } 00:19:01.276 ] 00:19:01.276 00:32:55 -- common/autotest_common.sh@893 -- # return 0 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.276 00:32:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.534 00:32:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:01.534 "name": "Existed_Raid", 00:19:01.534 "uuid": "9a9c3309-6b27-4cf2-b789-0abb17029c92", 00:19:01.534 "strip_size_kb": 64, 00:19:01.534 "state": "online", 00:19:01.534 "raid_level": "concat", 00:19:01.534 "superblock": false, 00:19:01.534 "num_base_bdevs": 3, 
00:19:01.534 "num_base_bdevs_discovered": 3, 00:19:01.534 "num_base_bdevs_operational": 3, 00:19:01.534 "base_bdevs_list": [ 00:19:01.534 { 00:19:01.534 "name": "BaseBdev1", 00:19:01.534 "uuid": "40c2fb53-a500-47b7-b6b8-7a23c5f3c89b", 00:19:01.534 "is_configured": true, 00:19:01.534 "data_offset": 0, 00:19:01.534 "data_size": 65536 00:19:01.534 }, 00:19:01.534 { 00:19:01.534 "name": "BaseBdev2", 00:19:01.534 "uuid": "6ba9921d-b9a3-4f7c-95cb-53039c8efc66", 00:19:01.534 "is_configured": true, 00:19:01.534 "data_offset": 0, 00:19:01.534 "data_size": 65536 00:19:01.534 }, 00:19:01.534 { 00:19:01.534 "name": "BaseBdev3", 00:19:01.534 "uuid": "8b8a37c4-cb34-4d49-9b3b-a7dd5e8f0cd9", 00:19:01.534 "is_configured": true, 00:19:01.534 "data_offset": 0, 00:19:01.534 "data_size": 65536 00:19:01.534 } 00:19:01.534 ] 00:19:01.534 }' 00:19:01.534 00:32:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:01.534 00:32:55 -- common/autotest_common.sh@10 -- # set +x 00:19:02.100 00:32:55 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:02.357 [2024-04-24 00:32:56.067650] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:02.357 [2024-04-24 00:32:56.067952] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.357 [2024-04-24 00:32:56.068107] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.615 00:32:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.873 00:32:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.873 "name": "Existed_Raid", 00:19:02.873 "uuid": "9a9c3309-6b27-4cf2-b789-0abb17029c92", 00:19:02.873 "strip_size_kb": 64, 00:19:02.873 "state": "offline", 00:19:02.873 "raid_level": "concat", 00:19:02.873 "superblock": false, 00:19:02.873 "num_base_bdevs": 3, 00:19:02.873 "num_base_bdevs_discovered": 2, 00:19:02.873 "num_base_bdevs_operational": 2, 00:19:02.873 "base_bdevs_list": [ 00:19:02.873 { 00:19:02.873 "name": null, 00:19:02.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.873 "is_configured": false, 00:19:02.873 "data_offset": 0, 00:19:02.873 "data_size": 65536 00:19:02.873 }, 
00:19:02.873 { 00:19:02.873 "name": "BaseBdev2", 00:19:02.873 "uuid": "6ba9921d-b9a3-4f7c-95cb-53039c8efc66", 00:19:02.873 "is_configured": true, 00:19:02.873 "data_offset": 0, 00:19:02.873 "data_size": 65536 00:19:02.873 }, 00:19:02.873 { 00:19:02.873 "name": "BaseBdev3", 00:19:02.873 "uuid": "8b8a37c4-cb34-4d49-9b3b-a7dd5e8f0cd9", 00:19:02.873 "is_configured": true, 00:19:02.873 "data_offset": 0, 00:19:02.873 "data_size": 65536 00:19:02.873 } 00:19:02.873 ] 00:19:02.873 }' 00:19:02.873 00:32:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.873 00:32:56 -- common/autotest_common.sh@10 -- # set +x 00:19:03.476 00:32:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:03.476 00:32:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:03.476 00:32:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.476 00:32:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:03.734 00:32:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:03.734 00:32:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:03.734 00:32:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:03.992 [2024-04-24 00:32:57.606782] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:03.992 00:32:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:03.992 00:32:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:03.992 00:32:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.992 00:32:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:04.251 00:32:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:04.251 00:32:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:04.251 00:32:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:04.509 [2024-04-24 00:32:58.191518] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:04.509 [2024-04-24 00:32:58.191841] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:19:04.768 00:32:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:04.768 00:32:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:04.768 00:32:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.768 00:32:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:05.027 00:32:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:05.027 00:32:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:05.027 00:32:58 -- bdev/bdev_raid.sh@287 -- # killprocess 124425 00:19:05.027 00:32:58 -- common/autotest_common.sh@936 -- # '[' -z 124425 ']' 00:19:05.027 00:32:58 -- common/autotest_common.sh@940 -- # kill -0 124425 00:19:05.027 00:32:58 -- common/autotest_common.sh@941 -- # uname 00:19:05.027 00:32:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:05.027 00:32:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124425 00:19:05.027 00:32:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:05.027 00:32:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:05.027 00:32:58 -- common/autotest_common.sh@954 -- # echo 'killing process 
with pid 124425' 00:19:05.027 killing process with pid 124425 00:19:05.027 00:32:58 -- common/autotest_common.sh@955 -- # kill 124425 00:19:05.027 [2024-04-24 00:32:58.632384] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:05.027 00:32:58 -- common/autotest_common.sh@960 -- # wait 124425 00:19:05.027 [2024-04-24 00:32:58.632680] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.399 ************************************ 00:19:06.399 END TEST raid_state_function_test 00:19:06.399 ************************************ 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:06.399 00:19:06.399 real 0m13.367s 00:19:06.399 user 0m22.729s 00:19:06.399 sys 0m1.933s 00:19:06.399 00:33:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:06.399 00:33:00 -- common/autotest_common.sh@10 -- # set +x 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:19:06.399 00:33:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:06.399 00:33:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:06.399 00:33:00 -- common/autotest_common.sh@10 -- # set +x 00:19:06.399 ************************************ 00:19:06.399 START TEST raid_state_function_test_sb 00:19:06.399 ************************************ 00:19:06.399 00:33:00 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 3 true 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=124824 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:06.399 Process raid pid: 124824 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124824' 00:19:06.399 00:33:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124824 /var/tmp/spdk-raid.sock 00:19:06.399 00:33:00 -- common/autotest_common.sh@817 -- # '[' -z 124824 ']' 00:19:06.399 00:33:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:06.399 00:33:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:06.399 00:33:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:06.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:06.399 00:33:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:06.399 00:33:00 -- common/autotest_common.sh@10 -- # set +x 00:19:06.657 [2024-04-24 00:33:00.263795] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:19:06.657 [2024-04-24 00:33:00.264164] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.915 [2024-04-24 00:33:00.453783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.915 [2024-04-24 00:33:00.694064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.172 [2024-04-24 00:33:00.925912] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.747 00:33:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:07.748 00:33:01 -- common/autotest_common.sh@850 -- # return 0 00:19:07.748 00:33:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:08.017 [2024-04-24 00:33:01.580446] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:08.017 [2024-04-24 00:33:01.581841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:08.017 [2024-04-24 00:33:01.582011] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.017 [2024-04-24 00:33:01.582098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:08.017 [2024-04-24 00:33:01.582221] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:08.017 [2024-04-24 00:33:01.582337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:08.017 00:33:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:08.017 00:33:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:08.017 00:33:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:08.017 00:33:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:08.017 00:33:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:08.017 00:33:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:08.017 00:33:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.017 00:33:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:08.017 00:33:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:08.017 00:33:01 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.017 00:33:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.017 00:33:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.275 00:33:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.275 "name": "Existed_Raid", 00:19:08.275 "uuid": "9b9de795-cedb-4e64-aef2-c3642b2aee95", 00:19:08.275 "strip_size_kb": 64, 00:19:08.275 "state": "configuring", 00:19:08.275 "raid_level": "concat", 00:19:08.275 "superblock": true, 00:19:08.275 "num_base_bdevs": 3, 00:19:08.275 "num_base_bdevs_discovered": 0, 00:19:08.275 "num_base_bdevs_operational": 3, 00:19:08.275 "base_bdevs_list": [ 00:19:08.275 { 00:19:08.275 "name": "BaseBdev1", 00:19:08.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.275 "is_configured": false, 00:19:08.275 "data_offset": 0, 00:19:08.275 "data_size": 0 00:19:08.275 }, 00:19:08.275 { 00:19:08.275 "name": "BaseBdev2", 00:19:08.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.275 "is_configured": false, 00:19:08.275 "data_offset": 0, 00:19:08.275 "data_size": 0 00:19:08.275 }, 00:19:08.275 { 00:19:08.275 "name": "BaseBdev3", 00:19:08.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.275 "is_configured": false, 00:19:08.275 "data_offset": 0, 00:19:08.275 "data_size": 0 00:19:08.275 } 00:19:08.275 ] 00:19:08.275 }' 00:19:08.275 00:33:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.275 00:33:01 -- common/autotest_common.sh@10 -- # set +x 00:19:08.841 00:33:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:09.099 [2024-04-24 00:33:02.800454] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:09.099 [2024-04-24 00:33:02.800715] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:19:09.099 00:33:02 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:09.357 [2024-04-24 00:33:03.020520] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:09.357 [2024-04-24 00:33:03.020784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:09.357 [2024-04-24 00:33:03.020911] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:09.357 [2024-04-24 00:33:03.021041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:09.357 [2024-04-24 00:33:03.021115] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:09.357 [2024-04-24 00:33:03.021215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:09.357 00:33:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:09.662 [2024-04-24 00:33:03.321201] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.662 BaseBdev1 00:19:09.662 00:33:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:09.662 00:33:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:09.662 00:33:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 
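The trace around this point deletes the half-configured Existed_Raid set, re-issues bdev_raid_create, and then backs BaseBdev1 with a 32 MB malloc bdev before waiting for it to register. A minimal stand-alone sketch of that RPC sequence, with the rpc.py path and -s socket copied from this run (outside this job they are assumptions):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1   # 32 MB volume, 512-byte blocks -> 65536 blocks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine                    # let examine callbacks finish first
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000      # wait up to -t for the bdev to appear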
00:19:09.662 00:33:03 -- common/autotest_common.sh@887 -- # local i 00:19:09.662 00:33:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:09.662 00:33:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:09.662 00:33:03 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:09.919 00:33:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:10.177 [ 00:19:10.177 { 00:19:10.177 "name": "BaseBdev1", 00:19:10.177 "aliases": [ 00:19:10.177 "8e6abaa9-eca1-4482-9413-60acc210f2d6" 00:19:10.177 ], 00:19:10.177 "product_name": "Malloc disk", 00:19:10.177 "block_size": 512, 00:19:10.177 "num_blocks": 65536, 00:19:10.177 "uuid": "8e6abaa9-eca1-4482-9413-60acc210f2d6", 00:19:10.177 "assigned_rate_limits": { 00:19:10.177 "rw_ios_per_sec": 0, 00:19:10.177 "rw_mbytes_per_sec": 0, 00:19:10.177 "r_mbytes_per_sec": 0, 00:19:10.177 "w_mbytes_per_sec": 0 00:19:10.177 }, 00:19:10.177 "claimed": true, 00:19:10.177 "claim_type": "exclusive_write", 00:19:10.177 "zoned": false, 00:19:10.177 "supported_io_types": { 00:19:10.177 "read": true, 00:19:10.177 "write": true, 00:19:10.177 "unmap": true, 00:19:10.177 "write_zeroes": true, 00:19:10.177 "flush": true, 00:19:10.177 "reset": true, 00:19:10.177 "compare": false, 00:19:10.177 "compare_and_write": false, 00:19:10.177 "abort": true, 00:19:10.177 "nvme_admin": false, 00:19:10.177 "nvme_io": false 00:19:10.177 }, 00:19:10.177 "memory_domains": [ 00:19:10.177 { 00:19:10.177 "dma_device_id": "system", 00:19:10.177 "dma_device_type": 1 00:19:10.177 }, 00:19:10.177 { 00:19:10.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.177 "dma_device_type": 2 00:19:10.177 } 00:19:10.177 ], 00:19:10.177 "driver_specific": {} 00:19:10.177 } 00:19:10.177 ] 00:19:10.177 00:33:03 -- common/autotest_common.sh@893 -- # return 0 00:19:10.177 00:33:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:10.177 00:33:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:10.177 00:33:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:10.177 00:33:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:10.177 00:33:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:10.177 00:33:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:10.177 00:33:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:10.177 00:33:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:10.177 00:33:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:10.177 00:33:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:10.177 00:33:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.177 00:33:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.435 00:33:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:10.435 "name": "Existed_Raid", 00:19:10.435 "uuid": "adea4b49-6fa8-45fd-ba6f-d5832d5952e8", 00:19:10.435 "strip_size_kb": 64, 00:19:10.435 "state": "configuring", 00:19:10.435 "raid_level": "concat", 00:19:10.435 "superblock": true, 00:19:10.435 "num_base_bdevs": 3, 00:19:10.435 "num_base_bdevs_discovered": 1, 00:19:10.435 "num_base_bdevs_operational": 3, 00:19:10.435 "base_bdevs_list": [ 00:19:10.435 { 00:19:10.435 "name": "BaseBdev1", 00:19:10.435 
"uuid": "8e6abaa9-eca1-4482-9413-60acc210f2d6", 00:19:10.435 "is_configured": true, 00:19:10.435 "data_offset": 2048, 00:19:10.435 "data_size": 63488 00:19:10.435 }, 00:19:10.435 { 00:19:10.435 "name": "BaseBdev2", 00:19:10.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.435 "is_configured": false, 00:19:10.435 "data_offset": 0, 00:19:10.435 "data_size": 0 00:19:10.435 }, 00:19:10.435 { 00:19:10.435 "name": "BaseBdev3", 00:19:10.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.435 "is_configured": false, 00:19:10.435 "data_offset": 0, 00:19:10.435 "data_size": 0 00:19:10.435 } 00:19:10.435 ] 00:19:10.435 }' 00:19:10.435 00:33:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:10.435 00:33:04 -- common/autotest_common.sh@10 -- # set +x 00:19:11.000 00:33:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:11.257 [2024-04-24 00:33:04.953599] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:11.258 [2024-04-24 00:33:04.953852] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:19:11.258 00:33:04 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:11.258 00:33:04 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:11.824 00:33:05 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:11.824 BaseBdev1 00:19:12.082 00:33:05 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:12.082 00:33:05 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:12.082 00:33:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:12.082 00:33:05 -- common/autotest_common.sh@887 -- # local i 00:19:12.082 00:33:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:12.082 00:33:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:12.082 00:33:05 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.340 00:33:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:12.340 [ 00:19:12.340 { 00:19:12.340 "name": "BaseBdev1", 00:19:12.340 "aliases": [ 00:19:12.340 "d5d973d6-7b8e-485c-939f-aae5654afd35" 00:19:12.340 ], 00:19:12.340 "product_name": "Malloc disk", 00:19:12.340 "block_size": 512, 00:19:12.340 "num_blocks": 65536, 00:19:12.340 "uuid": "d5d973d6-7b8e-485c-939f-aae5654afd35", 00:19:12.340 "assigned_rate_limits": { 00:19:12.340 "rw_ios_per_sec": 0, 00:19:12.340 "rw_mbytes_per_sec": 0, 00:19:12.340 "r_mbytes_per_sec": 0, 00:19:12.340 "w_mbytes_per_sec": 0 00:19:12.340 }, 00:19:12.340 "claimed": false, 00:19:12.340 "zoned": false, 00:19:12.340 "supported_io_types": { 00:19:12.340 "read": true, 00:19:12.340 "write": true, 00:19:12.340 "unmap": true, 00:19:12.340 "write_zeroes": true, 00:19:12.340 "flush": true, 00:19:12.340 "reset": true, 00:19:12.340 "compare": false, 00:19:12.340 "compare_and_write": false, 00:19:12.340 "abort": true, 00:19:12.340 "nvme_admin": false, 00:19:12.340 "nvme_io": false 00:19:12.340 }, 00:19:12.340 "memory_domains": [ 00:19:12.340 { 00:19:12.340 "dma_device_id": "system", 00:19:12.340 "dma_device_type": 1 00:19:12.340 }, 00:19:12.340 { 00:19:12.340 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:12.340 "dma_device_type": 2 00:19:12.340 } 00:19:12.340 ], 00:19:12.340 "driver_specific": {} 00:19:12.340 } 00:19:12.340 ] 00:19:12.340 00:33:06 -- common/autotest_common.sh@893 -- # return 0 00:19:12.340 00:33:06 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:12.906 [2024-04-24 00:33:06.403365] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.906 [2024-04-24 00:33:06.405865] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.906 [2024-04-24 00:33:06.406062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.906 [2024-04-24 00:33:06.406152] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:12.906 [2024-04-24 00:33:06.406277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:12.906 "name": "Existed_Raid", 00:19:12.906 "uuid": "7c2394e3-fab7-4256-a125-a2f68a02e424", 00:19:12.906 "strip_size_kb": 64, 00:19:12.906 "state": "configuring", 00:19:12.906 "raid_level": "concat", 00:19:12.906 "superblock": true, 00:19:12.906 "num_base_bdevs": 3, 00:19:12.906 "num_base_bdevs_discovered": 1, 00:19:12.906 "num_base_bdevs_operational": 3, 00:19:12.906 "base_bdevs_list": [ 00:19:12.906 { 00:19:12.906 "name": "BaseBdev1", 00:19:12.906 "uuid": "d5d973d6-7b8e-485c-939f-aae5654afd35", 00:19:12.906 "is_configured": true, 00:19:12.906 "data_offset": 2048, 00:19:12.906 "data_size": 63488 00:19:12.906 }, 00:19:12.906 { 00:19:12.906 "name": "BaseBdev2", 00:19:12.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.906 "is_configured": false, 00:19:12.906 "data_offset": 0, 00:19:12.906 "data_size": 0 00:19:12.906 }, 00:19:12.906 { 00:19:12.906 "name": "BaseBdev3", 00:19:12.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.906 "is_configured": false, 00:19:12.906 "data_offset": 0, 00:19:12.906 "data_size": 0 00:19:12.906 } 00:19:12.906 ] 00:19:12.906 }' 00:19:12.906 00:33:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:12.906 00:33:06 -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.472 00:33:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:13.731 [2024-04-24 00:33:07.475514] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:13.731 BaseBdev2 00:19:13.731 00:33:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:13.731 00:33:07 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:19:13.731 00:33:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:13.731 00:33:07 -- common/autotest_common.sh@887 -- # local i 00:19:13.731 00:33:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:13.731 00:33:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:13.731 00:33:07 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.310 00:33:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:14.568 [ 00:19:14.568 { 00:19:14.568 "name": "BaseBdev2", 00:19:14.568 "aliases": [ 00:19:14.568 "6eb3a5fe-68fb-4ebe-b24b-5a438570a7ae" 00:19:14.568 ], 00:19:14.568 "product_name": "Malloc disk", 00:19:14.568 "block_size": 512, 00:19:14.568 "num_blocks": 65536, 00:19:14.568 "uuid": "6eb3a5fe-68fb-4ebe-b24b-5a438570a7ae", 00:19:14.568 "assigned_rate_limits": { 00:19:14.568 "rw_ios_per_sec": 0, 00:19:14.568 "rw_mbytes_per_sec": 0, 00:19:14.568 "r_mbytes_per_sec": 0, 00:19:14.568 "w_mbytes_per_sec": 0 00:19:14.568 }, 00:19:14.568 "claimed": true, 00:19:14.568 "claim_type": "exclusive_write", 00:19:14.568 "zoned": false, 00:19:14.568 "supported_io_types": { 00:19:14.568 "read": true, 00:19:14.568 "write": true, 00:19:14.568 "unmap": true, 00:19:14.568 "write_zeroes": true, 00:19:14.568 "flush": true, 00:19:14.568 "reset": true, 00:19:14.568 "compare": false, 00:19:14.568 "compare_and_write": false, 00:19:14.568 "abort": true, 00:19:14.568 "nvme_admin": false, 00:19:14.568 "nvme_io": false 00:19:14.568 }, 00:19:14.568 "memory_domains": [ 00:19:14.568 { 00:19:14.568 "dma_device_id": "system", 00:19:14.568 "dma_device_type": 1 00:19:14.568 }, 00:19:14.568 { 00:19:14.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.568 "dma_device_type": 2 00:19:14.568 } 00:19:14.568 ], 00:19:14.568 "driver_specific": {} 00:19:14.568 } 00:19:14.568 ] 00:19:14.568 00:33:08 -- common/autotest_common.sh@893 -- # return 0 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.568 00:33:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.827 00:33:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:14.828 "name": "Existed_Raid", 00:19:14.828 "uuid": "7c2394e3-fab7-4256-a125-a2f68a02e424", 00:19:14.828 "strip_size_kb": 64, 00:19:14.828 "state": "configuring", 00:19:14.828 "raid_level": "concat", 00:19:14.828 "superblock": true, 00:19:14.828 "num_base_bdevs": 3, 00:19:14.828 "num_base_bdevs_discovered": 2, 00:19:14.828 "num_base_bdevs_operational": 3, 00:19:14.828 "base_bdevs_list": [ 00:19:14.828 { 00:19:14.828 "name": "BaseBdev1", 00:19:14.828 "uuid": "d5d973d6-7b8e-485c-939f-aae5654afd35", 00:19:14.828 "is_configured": true, 00:19:14.828 "data_offset": 2048, 00:19:14.828 "data_size": 63488 00:19:14.828 }, 00:19:14.828 { 00:19:14.828 "name": "BaseBdev2", 00:19:14.828 "uuid": "6eb3a5fe-68fb-4ebe-b24b-5a438570a7ae", 00:19:14.828 "is_configured": true, 00:19:14.828 "data_offset": 2048, 00:19:14.828 "data_size": 63488 00:19:14.828 }, 00:19:14.828 { 00:19:14.828 "name": "BaseBdev3", 00:19:14.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.828 "is_configured": false, 00:19:14.828 "data_offset": 0, 00:19:14.828 "data_size": 0 00:19:14.828 } 00:19:14.828 ] 00:19:14.828 }' 00:19:14.828 00:33:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:14.828 00:33:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.393 00:33:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:15.393 [2024-04-24 00:33:09.172653] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:15.393 [2024-04-24 00:33:09.173267] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:19:15.393 [2024-04-24 00:33:09.173422] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:15.393 [2024-04-24 00:33:09.173649] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:15.393 [2024-04-24 00:33:09.174165] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:19:15.393 [2024-04-24 00:33:09.174305] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:19:15.393 BaseBdev3 00:19:15.393 [2024-04-24 00:33:09.174610] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.651 00:33:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:15.651 00:33:09 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:19:15.651 00:33:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:15.651 00:33:09 -- common/autotest_common.sh@887 -- # local i 00:19:15.651 00:33:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:15.651 00:33:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:15.651 00:33:09 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:15.651 00:33:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:15.909 [ 00:19:15.909 { 00:19:15.909 "name": "BaseBdev3", 00:19:15.909 "aliases": [ 00:19:15.909 "ef59a241-fa4c-4cff-bcc3-f7fc779bb62e" 00:19:15.909 ], 00:19:15.909 
"product_name": "Malloc disk", 00:19:15.909 "block_size": 512, 00:19:15.909 "num_blocks": 65536, 00:19:15.909 "uuid": "ef59a241-fa4c-4cff-bcc3-f7fc779bb62e", 00:19:15.909 "assigned_rate_limits": { 00:19:15.909 "rw_ios_per_sec": 0, 00:19:15.909 "rw_mbytes_per_sec": 0, 00:19:15.909 "r_mbytes_per_sec": 0, 00:19:15.909 "w_mbytes_per_sec": 0 00:19:15.909 }, 00:19:15.909 "claimed": true, 00:19:15.909 "claim_type": "exclusive_write", 00:19:15.909 "zoned": false, 00:19:15.909 "supported_io_types": { 00:19:15.909 "read": true, 00:19:15.909 "write": true, 00:19:15.909 "unmap": true, 00:19:15.909 "write_zeroes": true, 00:19:15.909 "flush": true, 00:19:15.909 "reset": true, 00:19:15.909 "compare": false, 00:19:15.909 "compare_and_write": false, 00:19:15.909 "abort": true, 00:19:15.909 "nvme_admin": false, 00:19:15.909 "nvme_io": false 00:19:15.909 }, 00:19:15.909 "memory_domains": [ 00:19:15.909 { 00:19:15.909 "dma_device_id": "system", 00:19:15.909 "dma_device_type": 1 00:19:15.909 }, 00:19:15.909 { 00:19:15.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.909 "dma_device_type": 2 00:19:15.909 } 00:19:15.909 ], 00:19:15.909 "driver_specific": {} 00:19:15.909 } 00:19:15.909 ] 00:19:15.909 00:33:09 -- common/autotest_common.sh@893 -- # return 0 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.909 00:33:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.174 00:33:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:16.174 "name": "Existed_Raid", 00:19:16.174 "uuid": "7c2394e3-fab7-4256-a125-a2f68a02e424", 00:19:16.174 "strip_size_kb": 64, 00:19:16.174 "state": "online", 00:19:16.174 "raid_level": "concat", 00:19:16.174 "superblock": true, 00:19:16.174 "num_base_bdevs": 3, 00:19:16.174 "num_base_bdevs_discovered": 3, 00:19:16.174 "num_base_bdevs_operational": 3, 00:19:16.174 "base_bdevs_list": [ 00:19:16.174 { 00:19:16.174 "name": "BaseBdev1", 00:19:16.174 "uuid": "d5d973d6-7b8e-485c-939f-aae5654afd35", 00:19:16.174 "is_configured": true, 00:19:16.174 "data_offset": 2048, 00:19:16.174 "data_size": 63488 00:19:16.174 }, 00:19:16.174 { 00:19:16.174 "name": "BaseBdev2", 00:19:16.174 "uuid": "6eb3a5fe-68fb-4ebe-b24b-5a438570a7ae", 00:19:16.174 "is_configured": true, 00:19:16.174 "data_offset": 2048, 00:19:16.174 "data_size": 63488 00:19:16.174 }, 00:19:16.174 { 00:19:16.174 "name": "BaseBdev3", 00:19:16.174 "uuid": "ef59a241-fa4c-4cff-bcc3-f7fc779bb62e", 00:19:16.174 "is_configured": true, 00:19:16.174 "data_offset": 2048, 00:19:16.174 
"data_size": 63488 00:19:16.174 } 00:19:16.174 ] 00:19:16.174 }' 00:19:16.174 00:33:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:16.174 00:33:09 -- common/autotest_common.sh@10 -- # set +x 00:19:16.739 00:33:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:16.997 [2024-04-24 00:33:10.629119] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:16.997 [2024-04-24 00:33:10.629345] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.997 [2024-04-24 00:33:10.629492] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.997 00:33:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.255 00:33:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:17.255 "name": "Existed_Raid", 00:19:17.255 "uuid": "7c2394e3-fab7-4256-a125-a2f68a02e424", 00:19:17.255 "strip_size_kb": 64, 00:19:17.255 "state": "offline", 00:19:17.255 "raid_level": "concat", 00:19:17.255 "superblock": true, 00:19:17.255 "num_base_bdevs": 3, 00:19:17.255 "num_base_bdevs_discovered": 2, 00:19:17.255 "num_base_bdevs_operational": 2, 00:19:17.255 "base_bdevs_list": [ 00:19:17.255 { 00:19:17.255 "name": null, 00:19:17.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.255 "is_configured": false, 00:19:17.255 "data_offset": 2048, 00:19:17.255 "data_size": 63488 00:19:17.255 }, 00:19:17.255 { 00:19:17.255 "name": "BaseBdev2", 00:19:17.255 "uuid": "6eb3a5fe-68fb-4ebe-b24b-5a438570a7ae", 00:19:17.255 "is_configured": true, 00:19:17.255 "data_offset": 2048, 00:19:17.255 "data_size": 63488 00:19:17.255 }, 00:19:17.255 { 00:19:17.255 "name": "BaseBdev3", 00:19:17.255 "uuid": "ef59a241-fa4c-4cff-bcc3-f7fc779bb62e", 00:19:17.255 "is_configured": true, 00:19:17.255 "data_offset": 2048, 00:19:17.255 "data_size": 63488 00:19:17.255 } 00:19:17.255 ] 00:19:17.255 }' 00:19:17.255 00:33:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.255 00:33:11 -- common/autotest_common.sh@10 -- # set +x 00:19:18.254 00:33:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:18.254 00:33:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 
00:19:18.254 00:33:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.254 00:33:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:18.254 00:33:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:18.254 00:33:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:18.254 00:33:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:18.513 [2024-04-24 00:33:12.248079] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:18.771 00:33:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:18.771 00:33:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:18.771 00:33:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.771 00:33:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:19.027 00:33:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:19.027 00:33:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:19.027 00:33:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:19.285 [2024-04-24 00:33:12.953558] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:19.285 [2024-04-24 00:33:12.953824] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:19:19.544 00:33:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:19.544 00:33:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:19.544 00:33:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:19.544 00:33:13 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.835 00:33:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:19.835 00:33:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:19.835 00:33:13 -- bdev/bdev_raid.sh@287 -- # killprocess 124824 00:19:19.835 00:33:13 -- common/autotest_common.sh@936 -- # '[' -z 124824 ']' 00:19:19.835 00:33:13 -- common/autotest_common.sh@940 -- # kill -0 124824 00:19:19.835 00:33:13 -- common/autotest_common.sh@941 -- # uname 00:19:19.835 00:33:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:19.835 00:33:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124824 00:19:19.835 killing process with pid 124824 00:19:19.835 00:33:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:19.835 00:33:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:19.835 00:33:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124824' 00:19:19.835 00:33:13 -- common/autotest_common.sh@955 -- # kill 124824 00:19:19.835 [2024-04-24 00:33:13.397800] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:19.835 00:33:13 -- common/autotest_common.sh@960 -- # wait 124824 00:19:19.835 [2024-04-24 00:33:13.397941] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:21.210 ************************************ 00:19:21.210 END TEST raid_state_function_test_sb 00:19:21.210 ************************************ 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:21.210 00:19:21.210 real 0m14.680s 00:19:21.210 user 0m25.077s 00:19:21.210 sys 0m2.057s 00:19:21.210 00:33:14 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:19:21.210 00:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:19:21.210 00:33:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:21.210 00:33:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:21.210 00:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:21.210 ************************************ 00:19:21.210 START TEST raid_superblock_test 00:19:21.210 ************************************ 00:19:21.210 00:33:14 -- common/autotest_common.sh@1111 -- # raid_superblock_test concat 3 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@357 -- # raid_pid=125241 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:21.210 00:33:14 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125241 /var/tmp/spdk-raid.sock 00:19:21.210 00:33:14 -- common/autotest_common.sh@817 -- # '[' -z 125241 ']' 00:19:21.210 00:33:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:21.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:21.210 00:33:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:21.210 00:33:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:21.210 00:33:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:21.210 00:33:14 -- common/autotest_common.sh@10 -- # set +x 00:19:21.467 [2024-04-24 00:33:15.022394] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
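raid_superblock_test begins by starting its own bdev_svc app on the shared RAID socket and waiting for the RPC listener before any bdevs are registered. Roughly, the launch-and-wait the trace performs looks like the sketch below; the binary path, flags, and the waitforlisten helper appear in the trace, while backgrounding with & and reading $! are assumptions about how the helper script wires them together:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # common/autotest_common.sh helper: block until the app answers on the UNIX socket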
00:19:21.467 [2024-04-24 00:33:15.022949] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125241 ] 00:19:21.467 [2024-04-24 00:33:15.205854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.725 [2024-04-24 00:33:15.429918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.984 [2024-04-24 00:33:15.649488] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:22.242 00:33:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:22.242 00:33:15 -- common/autotest_common.sh@850 -- # return 0 00:19:22.242 00:33:15 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:22.242 00:33:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:22.242 00:33:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:22.242 00:33:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:22.242 00:33:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:22.242 00:33:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:22.242 00:33:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:22.242 00:33:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:22.242 00:33:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:22.500 malloc1 00:19:22.500 00:33:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:22.758 [2024-04-24 00:33:16.403396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:22.758 [2024-04-24 00:33:16.403700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.758 [2024-04-24 00:33:16.403859] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:19:22.758 [2024-04-24 00:33:16.404008] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.758 [2024-04-24 00:33:16.406777] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.758 [2024-04-24 00:33:16.406965] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:22.758 pt1 00:19:22.758 00:33:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:22.758 00:33:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:22.758 00:33:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:22.758 00:33:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:22.758 00:33:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:22.758 00:33:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:22.758 00:33:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:22.758 00:33:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:22.758 00:33:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:23.016 malloc2 00:19:23.016 00:33:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:19:23.274 [2024-04-24 00:33:16.895602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:23.274 [2024-04-24 00:33:16.895844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.274 [2024-04-24 00:33:16.895995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:23.274 [2024-04-24 00:33:16.896129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.274 [2024-04-24 00:33:16.898777] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.274 [2024-04-24 00:33:16.898965] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:23.274 pt2 00:19:23.274 00:33:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:23.274 00:33:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:23.274 00:33:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:23.274 00:33:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:23.274 00:33:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:23.274 00:33:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:23.274 00:33:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:23.274 00:33:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:23.274 00:33:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:23.532 malloc3 00:19:23.532 00:33:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:23.789 [2024-04-24 00:33:17.401968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:23.789 [2024-04-24 00:33:17.402281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.789 [2024-04-24 00:33:17.402418] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:19:23.789 [2024-04-24 00:33:17.402541] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.789 [2024-04-24 00:33:17.404801] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.790 [2024-04-24 00:33:17.404999] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:23.790 pt3 00:19:23.790 00:33:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:23.790 00:33:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:23.790 00:33:17 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:24.048 [2024-04-24 00:33:17.658080] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:24.048 [2024-04-24 00:33:17.660212] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:24.048 [2024-04-24 00:33:17.660398] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:24.048 [2024-04-24 00:33:17.660678] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:19:24.048 [2024-04-24 00:33:17.660773] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:24.048 [2024-04-24 00:33:17.660993] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:24.048 [2024-04-24 00:33:17.661457] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:19:24.048 [2024-04-24 00:33:17.661571] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:19:24.048 [2024-04-24 00:33:17.661815] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.048 00:33:17 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:24.048 00:33:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:24.048 00:33:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:24.048 00:33:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:24.048 00:33:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:24.048 00:33:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:24.048 00:33:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:24.048 00:33:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:24.048 00:33:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:24.048 00:33:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:24.048 00:33:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.048 00:33:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.306 00:33:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:24.306 "name": "raid_bdev1", 00:19:24.306 "uuid": "c4850e7d-fe38-4609-afa3-d975433466b5", 00:19:24.306 "strip_size_kb": 64, 00:19:24.306 "state": "online", 00:19:24.306 "raid_level": "concat", 00:19:24.306 "superblock": true, 00:19:24.306 "num_base_bdevs": 3, 00:19:24.306 "num_base_bdevs_discovered": 3, 00:19:24.306 "num_base_bdevs_operational": 3, 00:19:24.306 "base_bdevs_list": [ 00:19:24.306 { 00:19:24.306 "name": "pt1", 00:19:24.306 "uuid": "833ef1a3-bd80-53c7-82fc-352ee16e7521", 00:19:24.306 "is_configured": true, 00:19:24.306 "data_offset": 2048, 00:19:24.306 "data_size": 63488 00:19:24.306 }, 00:19:24.306 { 00:19:24.306 "name": "pt2", 00:19:24.306 "uuid": "50e141b0-1c53-5be6-a04d-962a1551ef4b", 00:19:24.306 "is_configured": true, 00:19:24.306 "data_offset": 2048, 00:19:24.306 "data_size": 63488 00:19:24.306 }, 00:19:24.306 { 00:19:24.306 "name": "pt3", 00:19:24.306 "uuid": "689062df-c4a2-5c4e-918e-3e80b6749d39", 00:19:24.306 "is_configured": true, 00:19:24.306 "data_offset": 2048, 00:19:24.306 "data_size": 63488 00:19:24.306 } 00:19:24.306 ] 00:19:24.306 }' 00:19:24.306 00:33:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:24.306 00:33:17 -- common/autotest_common.sh@10 -- # set +x 00:19:24.871 00:33:18 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:24.871 00:33:18 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:25.129 [2024-04-24 00:33:18.694605] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.129 00:33:18 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c4850e7d-fe38-4609-afa3-d975433466b5 00:19:25.129 00:33:18 -- bdev/bdev_raid.sh@380 -- # '[' -z c4850e7d-fe38-4609-afa3-d975433466b5 ']' 00:19:25.129 00:33:18 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:25.387 [2024-04-24 00:33:19.026410] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:25.387 [2024-04-24 00:33:19.026665] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.387 [2024-04-24 00:33:19.026890] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.387 [2024-04-24 00:33:19.027037] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.387 [2024-04-24 00:33:19.027264] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:19:25.387 00:33:19 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.387 00:33:19 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:25.654 00:33:19 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:25.654 00:33:19 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:25.654 00:33:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:25.654 00:33:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:25.912 00:33:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:25.912 00:33:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:26.169 00:33:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:26.169 00:33:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:26.424 00:33:20 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:26.424 00:33:20 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:26.683 00:33:20 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:26.683 00:33:20 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:26.683 00:33:20 -- common/autotest_common.sh@638 -- # local es=0 00:19:26.683 00:33:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:26.683 00:33:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.683 00:33:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:26.683 00:33:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.683 00:33:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:26.683 00:33:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.683 00:33:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:26.683 00:33:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.683 00:33:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:26.683 00:33:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:26.942 [2024-04-24 00:33:20.602730] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:26.942 [2024-04-24 00:33:20.605260] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:26.942 [2024-04-24 00:33:20.605499] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:26.942 [2024-04-24 00:33:20.605595] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:26.942 [2024-04-24 00:33:20.605815] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:26.942 [2024-04-24 00:33:20.605973] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:26.942 [2024-04-24 00:33:20.606122] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:26.942 [2024-04-24 00:33:20.606234] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:19:26.942 request: 00:19:26.942 { 00:19:26.942 "name": "raid_bdev1", 00:19:26.942 "raid_level": "concat", 00:19:26.942 "base_bdevs": [ 00:19:26.942 "malloc1", 00:19:26.942 "malloc2", 00:19:26.942 "malloc3" 00:19:26.942 ], 00:19:26.942 "superblock": false, 00:19:26.942 "strip_size_kb": 64, 00:19:26.942 "method": "bdev_raid_create", 00:19:26.942 "req_id": 1 00:19:26.942 } 00:19:26.942 Got JSON-RPC error response 00:19:26.942 response: 00:19:26.942 { 00:19:26.942 "code": -17, 00:19:26.942 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:26.942 } 00:19:26.942 00:33:20 -- common/autotest_common.sh@641 -- # es=1 00:19:26.942 00:33:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:26.942 00:33:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:26.942 00:33:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:26.942 00:33:20 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:26.942 00:33:20 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.200 00:33:20 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:27.200 00:33:20 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:27.200 00:33:20 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:27.459 [2024-04-24 00:33:21.038841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:27.459 [2024-04-24 00:33:21.039142] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.459 [2024-04-24 00:33:21.039242] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:27.459 [2024-04-24 00:33:21.039347] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.459 [2024-04-24 00:33:21.042076] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.459 [2024-04-24 00:33:21.042279] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:27.459 [2024-04-24 00:33:21.042557] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:27.459 [2024-04-24 00:33:21.042712] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:27.459 pt1 00:19:27.459 00:33:21 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring concat 64 3 00:19:27.459 00:33:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:27.459 00:33:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:27.459 00:33:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:27.459 00:33:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:27.459 00:33:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:27.459 00:33:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:27.459 00:33:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:27.459 00:33:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:27.459 00:33:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:27.459 00:33:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.459 00:33:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.718 00:33:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:27.718 "name": "raid_bdev1", 00:19:27.718 "uuid": "c4850e7d-fe38-4609-afa3-d975433466b5", 00:19:27.718 "strip_size_kb": 64, 00:19:27.718 "state": "configuring", 00:19:27.718 "raid_level": "concat", 00:19:27.718 "superblock": true, 00:19:27.718 "num_base_bdevs": 3, 00:19:27.718 "num_base_bdevs_discovered": 1, 00:19:27.718 "num_base_bdevs_operational": 3, 00:19:27.718 "base_bdevs_list": [ 00:19:27.718 { 00:19:27.718 "name": "pt1", 00:19:27.718 "uuid": "833ef1a3-bd80-53c7-82fc-352ee16e7521", 00:19:27.718 "is_configured": true, 00:19:27.718 "data_offset": 2048, 00:19:27.718 "data_size": 63488 00:19:27.718 }, 00:19:27.718 { 00:19:27.718 "name": null, 00:19:27.718 "uuid": "50e141b0-1c53-5be6-a04d-962a1551ef4b", 00:19:27.718 "is_configured": false, 00:19:27.718 "data_offset": 2048, 00:19:27.718 "data_size": 63488 00:19:27.718 }, 00:19:27.718 { 00:19:27.718 "name": null, 00:19:27.718 "uuid": "689062df-c4a2-5c4e-918e-3e80b6749d39", 00:19:27.718 "is_configured": false, 00:19:27.718 "data_offset": 2048, 00:19:27.718 "data_size": 63488 00:19:27.718 } 00:19:27.718 ] 00:19:27.718 }' 00:19:27.718 00:33:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:27.718 00:33:21 -- common/autotest_common.sh@10 -- # set +x 00:19:28.285 00:33:21 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:19:28.285 00:33:21 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:28.543 [2024-04-24 00:33:22.175319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:28.543 [2024-04-24 00:33:22.175655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.543 [2024-04-24 00:33:22.175757] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:28.543 [2024-04-24 00:33:22.175877] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.543 [2024-04-24 00:33:22.176426] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.543 [2024-04-24 00:33:22.176597] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:28.543 [2024-04-24 00:33:22.176868] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:28.543 [2024-04-24 00:33:22.176996] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:28.543 pt2 00:19:28.543 00:33:22 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:28.801 [2024-04-24 00:33:22.395459] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:28.801 00:33:22 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:28.801 00:33:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:28.801 00:33:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:28.801 00:33:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:28.801 00:33:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:28.801 00:33:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:28.801 00:33:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.801 00:33:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.801 00:33:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.801 00:33:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.801 00:33:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.801 00:33:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.059 00:33:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:29.059 "name": "raid_bdev1", 00:19:29.059 "uuid": "c4850e7d-fe38-4609-afa3-d975433466b5", 00:19:29.059 "strip_size_kb": 64, 00:19:29.059 "state": "configuring", 00:19:29.059 "raid_level": "concat", 00:19:29.059 "superblock": true, 00:19:29.059 "num_base_bdevs": 3, 00:19:29.059 "num_base_bdevs_discovered": 1, 00:19:29.059 "num_base_bdevs_operational": 3, 00:19:29.059 "base_bdevs_list": [ 00:19:29.059 { 00:19:29.059 "name": "pt1", 00:19:29.059 "uuid": "833ef1a3-bd80-53c7-82fc-352ee16e7521", 00:19:29.059 "is_configured": true, 00:19:29.059 "data_offset": 2048, 00:19:29.059 "data_size": 63488 00:19:29.059 }, 00:19:29.059 { 00:19:29.059 "name": null, 00:19:29.059 "uuid": "50e141b0-1c53-5be6-a04d-962a1551ef4b", 00:19:29.059 "is_configured": false, 00:19:29.059 "data_offset": 2048, 00:19:29.059 "data_size": 63488 00:19:29.059 }, 00:19:29.059 { 00:19:29.059 "name": null, 00:19:29.059 "uuid": "689062df-c4a2-5c4e-918e-3e80b6749d39", 00:19:29.059 "is_configured": false, 00:19:29.059 "data_offset": 2048, 00:19:29.059 "data_size": 63488 00:19:29.059 } 00:19:29.059 ] 00:19:29.059 }' 00:19:29.059 00:33:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.059 00:33:22 -- common/autotest_common.sh@10 -- # set +x 00:19:29.626 00:33:23 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:29.626 00:33:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:29.626 00:33:23 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:29.626 [2024-04-24 00:33:23.347598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:29.626 [2024-04-24 00:33:23.347885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.626 [2024-04-24 00:33:23.347954] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:29.626 [2024-04-24 00:33:23.348050] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.626 [2024-04-24 00:33:23.348511] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.626 [2024-04-24 00:33:23.348663] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:29.626 [2024-04-24 00:33:23.348879] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:29.626 [2024-04-24 00:33:23.349013] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:29.626 pt2 00:19:29.626 00:33:23 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:29.626 00:33:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:29.626 00:33:23 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:29.885 [2024-04-24 00:33:23.547628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:29.885 [2024-04-24 00:33:23.547889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.885 [2024-04-24 00:33:23.547962] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:29.885 [2024-04-24 00:33:23.548065] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.885 [2024-04-24 00:33:23.548576] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.885 [2024-04-24 00:33:23.548722] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:29.885 [2024-04-24 00:33:23.548993] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:29.885 [2024-04-24 00:33:23.549111] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:29.885 [2024-04-24 00:33:23.549277] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:19:29.885 [2024-04-24 00:33:23.549359] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:29.885 [2024-04-24 00:33:23.549553] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:29.885 [2024-04-24 00:33:23.549984] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:19:29.885 [2024-04-24 00:33:23.550112] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:19:29.885 [2024-04-24 00:33:23.550401] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.885 pt3 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:29.885 00:33:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.885 
00:33:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.144 00:33:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:30.144 "name": "raid_bdev1", 00:19:30.144 "uuid": "c4850e7d-fe38-4609-afa3-d975433466b5", 00:19:30.144 "strip_size_kb": 64, 00:19:30.144 "state": "online", 00:19:30.144 "raid_level": "concat", 00:19:30.144 "superblock": true, 00:19:30.144 "num_base_bdevs": 3, 00:19:30.144 "num_base_bdevs_discovered": 3, 00:19:30.144 "num_base_bdevs_operational": 3, 00:19:30.144 "base_bdevs_list": [ 00:19:30.144 { 00:19:30.144 "name": "pt1", 00:19:30.144 "uuid": "833ef1a3-bd80-53c7-82fc-352ee16e7521", 00:19:30.144 "is_configured": true, 00:19:30.144 "data_offset": 2048, 00:19:30.144 "data_size": 63488 00:19:30.144 }, 00:19:30.144 { 00:19:30.144 "name": "pt2", 00:19:30.144 "uuid": "50e141b0-1c53-5be6-a04d-962a1551ef4b", 00:19:30.144 "is_configured": true, 00:19:30.144 "data_offset": 2048, 00:19:30.144 "data_size": 63488 00:19:30.144 }, 00:19:30.144 { 00:19:30.144 "name": "pt3", 00:19:30.144 "uuid": "689062df-c4a2-5c4e-918e-3e80b6749d39", 00:19:30.144 "is_configured": true, 00:19:30.144 "data_offset": 2048, 00:19:30.144 "data_size": 63488 00:19:30.144 } 00:19:30.144 ] 00:19:30.144 }' 00:19:30.144 00:33:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:30.144 00:33:23 -- common/autotest_common.sh@10 -- # set +x 00:19:30.725 00:33:24 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:30.725 00:33:24 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:30.984 [2024-04-24 00:33:24.592168] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.984 00:33:24 -- bdev/bdev_raid.sh@430 -- # '[' c4850e7d-fe38-4609-afa3-d975433466b5 '!=' c4850e7d-fe38-4609-afa3-d975433466b5 ']' 00:19:30.984 00:33:24 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:19:30.984 00:33:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:30.984 00:33:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:30.984 00:33:24 -- bdev/bdev_raid.sh@511 -- # killprocess 125241 00:19:30.984 00:33:24 -- common/autotest_common.sh@936 -- # '[' -z 125241 ']' 00:19:30.984 00:33:24 -- common/autotest_common.sh@940 -- # kill -0 125241 00:19:30.984 00:33:24 -- common/autotest_common.sh@941 -- # uname 00:19:30.984 00:33:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:30.984 00:33:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125241 00:19:30.984 00:33:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:30.984 00:33:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:30.984 00:33:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125241' 00:19:30.984 killing process with pid 125241 00:19:30.984 00:33:24 -- common/autotest_common.sh@955 -- # kill 125241 00:19:30.984 [2024-04-24 00:33:24.641225] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:30.984 00:33:24 -- common/autotest_common.sh@960 -- # wait 125241 00:19:30.984 [2024-04-24 00:33:24.641488] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.984 [2024-04-24 00:33:24.641654] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:30.984 [2024-04-24 00:33:24.641736] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:19:31.241 [2024-04-24 00:33:24.965966] 
bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.612 00:33:26 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:32.612 00:19:32.612 real 0m11.456s 00:19:32.612 user 0m19.115s 00:19:32.612 sys 0m1.629s 00:19:32.612 00:33:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:32.870 00:33:26 -- common/autotest_common.sh@10 -- # set +x 00:19:32.870 ************************************ 00:19:32.870 END TEST raid_superblock_test 00:19:32.870 ************************************ 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:19:32.870 00:33:26 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:32.870 00:33:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:32.870 00:33:26 -- common/autotest_common.sh@10 -- # set +x 00:19:32.870 ************************************ 00:19:32.870 START TEST raid_state_function_test 00:19:32.870 ************************************ 00:19:32.870 00:33:26 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 3 false 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@226 -- # raid_pid=125563 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125563' 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:32.870 Process raid pid: 125563 00:19:32.870 00:33:26 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125563 /var/tmp/spdk-raid.sock 00:19:32.870 00:33:26 -- common/autotest_common.sh@817 -- # '[' -z 125563 ']' 00:19:32.870 00:33:26 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:32.870 00:33:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:32.870 00:33:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:32.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:32.870 00:33:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:32.870 00:33:26 -- common/autotest_common.sh@10 -- # set +x 00:19:32.870 [2024-04-24 00:33:26.581816] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:19:32.870 [2024-04-24 00:33:26.582173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.128 [2024-04-24 00:33:26.750591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.385 [2024-04-24 00:33:26.970353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.642 [2024-04-24 00:33:27.177121] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.900 00:33:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:33.900 00:33:27 -- common/autotest_common.sh@850 -- # return 0 00:19:33.900 00:33:27 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:34.158 [2024-04-24 00:33:27.735911] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:34.158 [2024-04-24 00:33:27.736169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:34.158 [2024-04-24 00:33:27.736289] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:34.158 [2024-04-24 00:33:27.736388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:34.158 [2024-04-24 00:33:27.736462] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:34.158 [2024-04-24 00:33:27.736543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:34.158 00:33:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:34.158 00:33:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:34.158 00:33:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:34.158 00:33:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:34.158 00:33:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:34.158 00:33:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:34.158 00:33:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:34.158 00:33:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:34.158 00:33:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:34.158 00:33:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:34.158 00:33:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.158 00:33:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.416 00:33:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:34.416 "name": 
"Existed_Raid", 00:19:34.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.416 "strip_size_kb": 0, 00:19:34.416 "state": "configuring", 00:19:34.416 "raid_level": "raid1", 00:19:34.416 "superblock": false, 00:19:34.416 "num_base_bdevs": 3, 00:19:34.416 "num_base_bdevs_discovered": 0, 00:19:34.416 "num_base_bdevs_operational": 3, 00:19:34.416 "base_bdevs_list": [ 00:19:34.416 { 00:19:34.416 "name": "BaseBdev1", 00:19:34.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.416 "is_configured": false, 00:19:34.416 "data_offset": 0, 00:19:34.416 "data_size": 0 00:19:34.416 }, 00:19:34.416 { 00:19:34.416 "name": "BaseBdev2", 00:19:34.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.416 "is_configured": false, 00:19:34.416 "data_offset": 0, 00:19:34.416 "data_size": 0 00:19:34.416 }, 00:19:34.416 { 00:19:34.416 "name": "BaseBdev3", 00:19:34.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.416 "is_configured": false, 00:19:34.416 "data_offset": 0, 00:19:34.416 "data_size": 0 00:19:34.416 } 00:19:34.416 ] 00:19:34.416 }' 00:19:34.416 00:33:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:34.416 00:33:28 -- common/autotest_common.sh@10 -- # set +x 00:19:34.982 00:33:28 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:35.240 [2024-04-24 00:33:28.928182] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:35.240 [2024-04-24 00:33:28.928363] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:19:35.240 00:33:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:35.535 [2024-04-24 00:33:29.196217] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:35.535 [2024-04-24 00:33:29.196490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:35.535 [2024-04-24 00:33:29.196631] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:35.535 [2024-04-24 00:33:29.196683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:35.535 [2024-04-24 00:33:29.196709] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:35.535 [2024-04-24 00:33:29.196805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:35.535 00:33:29 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:35.795 [2024-04-24 00:33:29.453212] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:35.795 BaseBdev1 00:19:35.795 00:33:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:35.795 00:33:29 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:35.795 00:33:29 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:35.795 00:33:29 -- common/autotest_common.sh@887 -- # local i 00:19:35.795 00:33:29 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:35.795 00:33:29 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:35.795 00:33:29 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:36.054 
00:33:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:36.311 [ 00:19:36.311 { 00:19:36.311 "name": "BaseBdev1", 00:19:36.311 "aliases": [ 00:19:36.311 "aff40af5-2932-4744-b69e-7ad31e532d20" 00:19:36.311 ], 00:19:36.311 "product_name": "Malloc disk", 00:19:36.311 "block_size": 512, 00:19:36.311 "num_blocks": 65536, 00:19:36.311 "uuid": "aff40af5-2932-4744-b69e-7ad31e532d20", 00:19:36.311 "assigned_rate_limits": { 00:19:36.311 "rw_ios_per_sec": 0, 00:19:36.311 "rw_mbytes_per_sec": 0, 00:19:36.311 "r_mbytes_per_sec": 0, 00:19:36.311 "w_mbytes_per_sec": 0 00:19:36.311 }, 00:19:36.311 "claimed": true, 00:19:36.311 "claim_type": "exclusive_write", 00:19:36.311 "zoned": false, 00:19:36.311 "supported_io_types": { 00:19:36.311 "read": true, 00:19:36.311 "write": true, 00:19:36.311 "unmap": true, 00:19:36.311 "write_zeroes": true, 00:19:36.311 "flush": true, 00:19:36.311 "reset": true, 00:19:36.311 "compare": false, 00:19:36.311 "compare_and_write": false, 00:19:36.311 "abort": true, 00:19:36.311 "nvme_admin": false, 00:19:36.311 "nvme_io": false 00:19:36.311 }, 00:19:36.311 "memory_domains": [ 00:19:36.311 { 00:19:36.311 "dma_device_id": "system", 00:19:36.311 "dma_device_type": 1 00:19:36.311 }, 00:19:36.311 { 00:19:36.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.311 "dma_device_type": 2 00:19:36.311 } 00:19:36.311 ], 00:19:36.311 "driver_specific": {} 00:19:36.311 } 00:19:36.311 ] 00:19:36.311 00:33:29 -- common/autotest_common.sh@893 -- # return 0 00:19:36.311 00:33:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:36.311 00:33:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:36.311 00:33:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:36.311 00:33:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:36.311 00:33:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:36.311 00:33:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:36.311 00:33:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.311 00:33:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.311 00:33:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.311 00:33:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.311 00:33:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.311 00:33:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.568 00:33:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:36.568 "name": "Existed_Raid", 00:19:36.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.568 "strip_size_kb": 0, 00:19:36.568 "state": "configuring", 00:19:36.568 "raid_level": "raid1", 00:19:36.568 "superblock": false, 00:19:36.568 "num_base_bdevs": 3, 00:19:36.568 "num_base_bdevs_discovered": 1, 00:19:36.568 "num_base_bdevs_operational": 3, 00:19:36.568 "base_bdevs_list": [ 00:19:36.568 { 00:19:36.568 "name": "BaseBdev1", 00:19:36.568 "uuid": "aff40af5-2932-4744-b69e-7ad31e532d20", 00:19:36.568 "is_configured": true, 00:19:36.568 "data_offset": 0, 00:19:36.568 "data_size": 65536 00:19:36.568 }, 00:19:36.568 { 00:19:36.568 "name": "BaseBdev2", 00:19:36.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.568 "is_configured": false, 00:19:36.568 "data_offset": 0, 00:19:36.568 "data_size": 0 00:19:36.568 }, 
00:19:36.568 { 00:19:36.568 "name": "BaseBdev3", 00:19:36.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.568 "is_configured": false, 00:19:36.568 "data_offset": 0, 00:19:36.568 "data_size": 0 00:19:36.568 } 00:19:36.568 ] 00:19:36.568 }' 00:19:36.568 00:33:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.568 00:33:30 -- common/autotest_common.sh@10 -- # set +x 00:19:37.150 00:33:30 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:37.408 [2024-04-24 00:33:31.037622] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:37.408 [2024-04-24 00:33:31.037885] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:19:37.408 00:33:31 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:37.408 00:33:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:37.664 [2024-04-24 00:33:31.265688] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:37.664 [2024-04-24 00:33:31.268116] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:37.664 [2024-04-24 00:33:31.268320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:37.664 [2024-04-24 00:33:31.268422] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:37.664 [2024-04-24 00:33:31.268486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.664 00:33:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.922 00:33:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:37.922 "name": "Existed_Raid", 00:19:37.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.922 "strip_size_kb": 0, 00:19:37.922 "state": "configuring", 00:19:37.922 "raid_level": "raid1", 00:19:37.922 "superblock": false, 00:19:37.922 "num_base_bdevs": 3, 00:19:37.922 "num_base_bdevs_discovered": 1, 00:19:37.922 "num_base_bdevs_operational": 3, 00:19:37.922 "base_bdevs_list": [ 00:19:37.922 { 00:19:37.922 "name": "BaseBdev1", 00:19:37.922 "uuid": "aff40af5-2932-4744-b69e-7ad31e532d20", 00:19:37.922 "is_configured": true, 00:19:37.922 
"data_offset": 0, 00:19:37.922 "data_size": 65536 00:19:37.922 }, 00:19:37.922 { 00:19:37.922 "name": "BaseBdev2", 00:19:37.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.922 "is_configured": false, 00:19:37.922 "data_offset": 0, 00:19:37.922 "data_size": 0 00:19:37.922 }, 00:19:37.922 { 00:19:37.922 "name": "BaseBdev3", 00:19:37.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.922 "is_configured": false, 00:19:37.922 "data_offset": 0, 00:19:37.922 "data_size": 0 00:19:37.922 } 00:19:37.922 ] 00:19:37.922 }' 00:19:37.922 00:33:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:37.922 00:33:31 -- common/autotest_common.sh@10 -- # set +x 00:19:38.488 00:33:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:39.082 [2024-04-24 00:33:32.583718] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:39.082 BaseBdev2 00:19:39.082 00:33:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:39.082 00:33:32 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:19:39.082 00:33:32 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:39.082 00:33:32 -- common/autotest_common.sh@887 -- # local i 00:19:39.082 00:33:32 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:39.082 00:33:32 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:39.082 00:33:32 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:39.082 00:33:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:39.340 [ 00:19:39.341 { 00:19:39.341 "name": "BaseBdev2", 00:19:39.341 "aliases": [ 00:19:39.341 "c62ee327-1a9c-4eab-8a96-6f963ed17d89" 00:19:39.341 ], 00:19:39.341 "product_name": "Malloc disk", 00:19:39.341 "block_size": 512, 00:19:39.341 "num_blocks": 65536, 00:19:39.341 "uuid": "c62ee327-1a9c-4eab-8a96-6f963ed17d89", 00:19:39.341 "assigned_rate_limits": { 00:19:39.341 "rw_ios_per_sec": 0, 00:19:39.341 "rw_mbytes_per_sec": 0, 00:19:39.341 "r_mbytes_per_sec": 0, 00:19:39.341 "w_mbytes_per_sec": 0 00:19:39.341 }, 00:19:39.341 "claimed": true, 00:19:39.341 "claim_type": "exclusive_write", 00:19:39.341 "zoned": false, 00:19:39.341 "supported_io_types": { 00:19:39.341 "read": true, 00:19:39.341 "write": true, 00:19:39.341 "unmap": true, 00:19:39.341 "write_zeroes": true, 00:19:39.341 "flush": true, 00:19:39.341 "reset": true, 00:19:39.341 "compare": false, 00:19:39.341 "compare_and_write": false, 00:19:39.341 "abort": true, 00:19:39.341 "nvme_admin": false, 00:19:39.341 "nvme_io": false 00:19:39.341 }, 00:19:39.341 "memory_domains": [ 00:19:39.341 { 00:19:39.341 "dma_device_id": "system", 00:19:39.341 "dma_device_type": 1 00:19:39.341 }, 00:19:39.341 { 00:19:39.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.341 "dma_device_type": 2 00:19:39.341 } 00:19:39.341 ], 00:19:39.341 "driver_specific": {} 00:19:39.341 } 00:19:39.341 ] 00:19:39.341 00:33:33 -- common/autotest_common.sh@893 -- # return 0 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:39.341 00:33:33 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.341 00:33:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.598 00:33:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:39.598 "name": "Existed_Raid", 00:19:39.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.598 "strip_size_kb": 0, 00:19:39.598 "state": "configuring", 00:19:39.598 "raid_level": "raid1", 00:19:39.598 "superblock": false, 00:19:39.598 "num_base_bdevs": 3, 00:19:39.598 "num_base_bdevs_discovered": 2, 00:19:39.598 "num_base_bdevs_operational": 3, 00:19:39.598 "base_bdevs_list": [ 00:19:39.598 { 00:19:39.598 "name": "BaseBdev1", 00:19:39.598 "uuid": "aff40af5-2932-4744-b69e-7ad31e532d20", 00:19:39.598 "is_configured": true, 00:19:39.598 "data_offset": 0, 00:19:39.598 "data_size": 65536 00:19:39.598 }, 00:19:39.598 { 00:19:39.598 "name": "BaseBdev2", 00:19:39.598 "uuid": "c62ee327-1a9c-4eab-8a96-6f963ed17d89", 00:19:39.598 "is_configured": true, 00:19:39.598 "data_offset": 0, 00:19:39.598 "data_size": 65536 00:19:39.598 }, 00:19:39.598 { 00:19:39.598 "name": "BaseBdev3", 00:19:39.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.598 "is_configured": false, 00:19:39.598 "data_offset": 0, 00:19:39.598 "data_size": 0 00:19:39.598 } 00:19:39.598 ] 00:19:39.598 }' 00:19:39.598 00:33:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:39.598 00:33:33 -- common/autotest_common.sh@10 -- # set +x 00:19:40.532 00:33:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:40.532 [2024-04-24 00:33:34.249145] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:40.532 [2024-04-24 00:33:34.249460] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:19:40.532 [2024-04-24 00:33:34.249505] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:40.532 [2024-04-24 00:33:34.249772] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:40.532 [2024-04-24 00:33:34.250238] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:19:40.532 [2024-04-24 00:33:34.250350] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:19:40.532 [2024-04-24 00:33:34.250694] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.532 BaseBdev3 00:19:40.532 00:33:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:40.532 00:33:34 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:19:40.532 00:33:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:40.532 00:33:34 -- common/autotest_common.sh@887 -- # local i 00:19:40.532 00:33:34 -- 
common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:40.532 00:33:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:40.532 00:33:34 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:40.791 00:33:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:41.049 [ 00:19:41.049 { 00:19:41.049 "name": "BaseBdev3", 00:19:41.049 "aliases": [ 00:19:41.049 "9f92c818-f5b7-4760-a16c-c8ae4a2708b1" 00:19:41.049 ], 00:19:41.049 "product_name": "Malloc disk", 00:19:41.049 "block_size": 512, 00:19:41.049 "num_blocks": 65536, 00:19:41.049 "uuid": "9f92c818-f5b7-4760-a16c-c8ae4a2708b1", 00:19:41.049 "assigned_rate_limits": { 00:19:41.049 "rw_ios_per_sec": 0, 00:19:41.049 "rw_mbytes_per_sec": 0, 00:19:41.049 "r_mbytes_per_sec": 0, 00:19:41.049 "w_mbytes_per_sec": 0 00:19:41.049 }, 00:19:41.049 "claimed": true, 00:19:41.049 "claim_type": "exclusive_write", 00:19:41.049 "zoned": false, 00:19:41.049 "supported_io_types": { 00:19:41.049 "read": true, 00:19:41.049 "write": true, 00:19:41.049 "unmap": true, 00:19:41.049 "write_zeroes": true, 00:19:41.049 "flush": true, 00:19:41.049 "reset": true, 00:19:41.049 "compare": false, 00:19:41.049 "compare_and_write": false, 00:19:41.049 "abort": true, 00:19:41.049 "nvme_admin": false, 00:19:41.049 "nvme_io": false 00:19:41.049 }, 00:19:41.049 "memory_domains": [ 00:19:41.049 { 00:19:41.049 "dma_device_id": "system", 00:19:41.049 "dma_device_type": 1 00:19:41.049 }, 00:19:41.049 { 00:19:41.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.049 "dma_device_type": 2 00:19:41.049 } 00:19:41.049 ], 00:19:41.049 "driver_specific": {} 00:19:41.049 } 00:19:41.049 ] 00:19:41.049 00:33:34 -- common/autotest_common.sh@893 -- # return 0 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.049 00:33:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.308 00:33:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.308 "name": "Existed_Raid", 00:19:41.308 "uuid": "21e785b5-5865-43e2-a69b-fea9d9ba290e", 00:19:41.308 "strip_size_kb": 0, 00:19:41.308 "state": "online", 00:19:41.308 "raid_level": "raid1", 00:19:41.308 "superblock": false, 00:19:41.308 "num_base_bdevs": 3, 00:19:41.308 "num_base_bdevs_discovered": 3, 00:19:41.308 "num_base_bdevs_operational": 3, 00:19:41.308 "base_bdevs_list": [ 00:19:41.308 { 00:19:41.308 "name": 
"BaseBdev1", 00:19:41.308 "uuid": "aff40af5-2932-4744-b69e-7ad31e532d20", 00:19:41.308 "is_configured": true, 00:19:41.308 "data_offset": 0, 00:19:41.308 "data_size": 65536 00:19:41.308 }, 00:19:41.308 { 00:19:41.308 "name": "BaseBdev2", 00:19:41.308 "uuid": "c62ee327-1a9c-4eab-8a96-6f963ed17d89", 00:19:41.308 "is_configured": true, 00:19:41.308 "data_offset": 0, 00:19:41.308 "data_size": 65536 00:19:41.308 }, 00:19:41.308 { 00:19:41.308 "name": "BaseBdev3", 00:19:41.308 "uuid": "9f92c818-f5b7-4760-a16c-c8ae4a2708b1", 00:19:41.308 "is_configured": true, 00:19:41.308 "data_offset": 0, 00:19:41.308 "data_size": 65536 00:19:41.308 } 00:19:41.308 ] 00:19:41.308 }' 00:19:41.308 00:33:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.308 00:33:34 -- common/autotest_common.sh@10 -- # set +x 00:19:41.874 00:33:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:42.133 [2024-04-24 00:33:35.747767] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.133 00:33:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.392 00:33:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.392 "name": "Existed_Raid", 00:19:42.392 "uuid": "21e785b5-5865-43e2-a69b-fea9d9ba290e", 00:19:42.392 "strip_size_kb": 0, 00:19:42.392 "state": "online", 00:19:42.392 "raid_level": "raid1", 00:19:42.392 "superblock": false, 00:19:42.392 "num_base_bdevs": 3, 00:19:42.392 "num_base_bdevs_discovered": 2, 00:19:42.392 "num_base_bdevs_operational": 2, 00:19:42.392 "base_bdevs_list": [ 00:19:42.392 { 00:19:42.392 "name": null, 00:19:42.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.392 "is_configured": false, 00:19:42.392 "data_offset": 0, 00:19:42.392 "data_size": 65536 00:19:42.392 }, 00:19:42.392 { 00:19:42.392 "name": "BaseBdev2", 00:19:42.392 "uuid": "c62ee327-1a9c-4eab-8a96-6f963ed17d89", 00:19:42.392 "is_configured": true, 00:19:42.392 "data_offset": 0, 00:19:42.392 "data_size": 65536 00:19:42.392 }, 00:19:42.392 { 00:19:42.392 "name": "BaseBdev3", 00:19:42.392 "uuid": "9f92c818-f5b7-4760-a16c-c8ae4a2708b1", 00:19:42.392 "is_configured": true, 00:19:42.392 "data_offset": 0, 00:19:42.392 "data_size": 
65536 00:19:42.392 } 00:19:42.392 ] 00:19:42.392 }' 00:19:42.392 00:33:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.392 00:33:36 -- common/autotest_common.sh@10 -- # set +x 00:19:43.369 00:33:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:43.369 00:33:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:43.369 00:33:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.369 00:33:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:43.369 00:33:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:43.369 00:33:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:43.369 00:33:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:43.626 [2024-04-24 00:33:37.263489] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:43.626 00:33:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:43.626 00:33:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:43.626 00:33:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.626 00:33:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:43.883 00:33:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:43.883 00:33:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:43.883 00:33:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:44.141 [2024-04-24 00:33:37.839899] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:44.141 [2024-04-24 00:33:37.840213] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.399 [2024-04-24 00:33:37.953032] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.399 [2024-04-24 00:33:37.953425] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.399 [2024-04-24 00:33:37.953538] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:19:44.399 00:33:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:44.399 00:33:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:44.399 00:33:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.399 00:33:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:44.657 00:33:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:44.657 00:33:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:44.657 00:33:38 -- bdev/bdev_raid.sh@287 -- # killprocess 125563 00:19:44.657 00:33:38 -- common/autotest_common.sh@936 -- # '[' -z 125563 ']' 00:19:44.657 00:33:38 -- common/autotest_common.sh@940 -- # kill -0 125563 00:19:44.657 00:33:38 -- common/autotest_common.sh@941 -- # uname 00:19:44.657 00:33:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:44.657 00:33:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125563 00:19:44.657 killing process with pid 125563 00:19:44.657 00:33:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:44.657 00:33:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:44.657 00:33:38 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 125563' 00:19:44.657 00:33:38 -- common/autotest_common.sh@955 -- # kill 125563 00:19:44.657 [2024-04-24 00:33:38.314924] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:44.657 00:33:38 -- common/autotest_common.sh@960 -- # wait 125563 00:19:44.657 [2024-04-24 00:33:38.315085] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:46.059 ************************************ 00:19:46.059 END TEST raid_state_function_test 00:19:46.059 ************************************ 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:46.059 00:19:46.059 real 0m13.171s 00:19:46.059 user 0m22.630s 00:19:46.059 sys 0m1.675s 00:19:46.059 00:33:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:46.059 00:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:19:46.059 00:33:39 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:46.059 00:33:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:46.059 00:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:46.059 ************************************ 00:19:46.059 START TEST raid_state_function_test_sb 00:19:46.059 ************************************ 00:19:46.059 00:33:39 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 3 true 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@226 -- # raid_pid=125960 00:19:46.059 Process raid pid: 125960 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125960' 00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125960 /var/tmp/spdk-raid.sock 
00:19:46.059 00:33:39 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:46.060 00:33:39 -- common/autotest_common.sh@817 -- # '[' -z 125960 ']' 00:19:46.060 00:33:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:46.060 00:33:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:46.060 00:33:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:46.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:46.060 00:33:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:46.060 00:33:39 -- common/autotest_common.sh@10 -- # set +x 00:19:46.318 [2024-04-24 00:33:39.860802] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:19:46.318 [2024-04-24 00:33:39.861030] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.318 [2024-04-24 00:33:40.044693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.576 [2024-04-24 00:33:40.329284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.835 [2024-04-24 00:33:40.596940] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:47.092 00:33:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:47.092 00:33:40 -- common/autotest_common.sh@850 -- # return 0 00:19:47.092 00:33:40 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:47.350 [2024-04-24 00:33:41.001127] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:47.350 [2024-04-24 00:33:41.001209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:47.350 [2024-04-24 00:33:41.001222] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:47.350 [2024-04-24 00:33:41.001243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:47.350 [2024-04-24 00:33:41.001251] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:47.350 [2024-04-24 00:33:41.001294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:47.350 00:33:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:47.350 00:33:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:47.350 00:33:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:47.350 00:33:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.350 00:33:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.350 00:33:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:47.350 00:33:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.350 00:33:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.350 00:33:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.350 00:33:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.350 00:33:41 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.350 00:33:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.609 00:33:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:47.609 "name": "Existed_Raid", 00:19:47.609 "uuid": "69182abd-716a-45ec-bd41-bb1ca2255347", 00:19:47.609 "strip_size_kb": 0, 00:19:47.609 "state": "configuring", 00:19:47.609 "raid_level": "raid1", 00:19:47.609 "superblock": true, 00:19:47.609 "num_base_bdevs": 3, 00:19:47.609 "num_base_bdevs_discovered": 0, 00:19:47.609 "num_base_bdevs_operational": 3, 00:19:47.609 "base_bdevs_list": [ 00:19:47.609 { 00:19:47.609 "name": "BaseBdev1", 00:19:47.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.609 "is_configured": false, 00:19:47.609 "data_offset": 0, 00:19:47.609 "data_size": 0 00:19:47.609 }, 00:19:47.609 { 00:19:47.609 "name": "BaseBdev2", 00:19:47.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.609 "is_configured": false, 00:19:47.609 "data_offset": 0, 00:19:47.609 "data_size": 0 00:19:47.609 }, 00:19:47.609 { 00:19:47.609 "name": "BaseBdev3", 00:19:47.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.609 "is_configured": false, 00:19:47.609 "data_offset": 0, 00:19:47.609 "data_size": 0 00:19:47.609 } 00:19:47.609 ] 00:19:47.609 }' 00:19:47.609 00:33:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:47.609 00:33:41 -- common/autotest_common.sh@10 -- # set +x 00:19:48.176 00:33:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:48.434 [2024-04-24 00:33:42.213331] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:48.434 [2024-04-24 00:33:42.213377] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:19:48.692 00:33:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:48.692 [2024-04-24 00:33:42.413391] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:48.693 [2024-04-24 00:33:42.413475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:48.693 [2024-04-24 00:33:42.413487] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:48.693 [2024-04-24 00:33:42.413507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:48.693 [2024-04-24 00:33:42.413515] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:48.693 [2024-04-24 00:33:42.413540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:48.693 00:33:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:48.951 [2024-04-24 00:33:42.651239] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:48.951 BaseBdev1 00:19:48.951 00:33:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:48.951 00:33:42 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:48.951 00:33:42 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:48.951 00:33:42 -- common/autotest_common.sh@887 -- # local i 00:19:48.951 00:33:42 -- 
common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:48.951 00:33:42 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:48.951 00:33:42 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:49.209 00:33:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:49.497 [ 00:19:49.497 { 00:19:49.497 "name": "BaseBdev1", 00:19:49.497 "aliases": [ 00:19:49.497 "9a8dff4d-1cf7-4fb8-8004-60555c1ecb10" 00:19:49.497 ], 00:19:49.497 "product_name": "Malloc disk", 00:19:49.497 "block_size": 512, 00:19:49.497 "num_blocks": 65536, 00:19:49.497 "uuid": "9a8dff4d-1cf7-4fb8-8004-60555c1ecb10", 00:19:49.497 "assigned_rate_limits": { 00:19:49.497 "rw_ios_per_sec": 0, 00:19:49.497 "rw_mbytes_per_sec": 0, 00:19:49.497 "r_mbytes_per_sec": 0, 00:19:49.497 "w_mbytes_per_sec": 0 00:19:49.497 }, 00:19:49.497 "claimed": true, 00:19:49.497 "claim_type": "exclusive_write", 00:19:49.498 "zoned": false, 00:19:49.498 "supported_io_types": { 00:19:49.498 "read": true, 00:19:49.498 "write": true, 00:19:49.498 "unmap": true, 00:19:49.498 "write_zeroes": true, 00:19:49.498 "flush": true, 00:19:49.498 "reset": true, 00:19:49.498 "compare": false, 00:19:49.498 "compare_and_write": false, 00:19:49.498 "abort": true, 00:19:49.498 "nvme_admin": false, 00:19:49.498 "nvme_io": false 00:19:49.498 }, 00:19:49.498 "memory_domains": [ 00:19:49.498 { 00:19:49.498 "dma_device_id": "system", 00:19:49.498 "dma_device_type": 1 00:19:49.498 }, 00:19:49.498 { 00:19:49.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.498 "dma_device_type": 2 00:19:49.498 } 00:19:49.498 ], 00:19:49.498 "driver_specific": {} 00:19:49.498 } 00:19:49.498 ] 00:19:49.498 00:33:43 -- common/autotest_common.sh@893 -- # return 0 00:19:49.498 00:33:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:49.498 00:33:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:49.498 00:33:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:49.498 00:33:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.498 00:33:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.498 00:33:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:49.498 00:33:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.498 00:33:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.498 00:33:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.498 00:33:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.498 00:33:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.498 00:33:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.755 00:33:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.755 "name": "Existed_Raid", 00:19:49.755 "uuid": "563deb64-853b-44e1-a401-9138fae7d858", 00:19:49.755 "strip_size_kb": 0, 00:19:49.755 "state": "configuring", 00:19:49.755 "raid_level": "raid1", 00:19:49.755 "superblock": true, 00:19:49.755 "num_base_bdevs": 3, 00:19:49.755 "num_base_bdevs_discovered": 1, 00:19:49.755 "num_base_bdevs_operational": 3, 00:19:49.755 "base_bdevs_list": [ 00:19:49.755 { 00:19:49.755 "name": "BaseBdev1", 00:19:49.755 "uuid": "9a8dff4d-1cf7-4fb8-8004-60555c1ecb10", 00:19:49.755 "is_configured": true, 00:19:49.755 
"data_offset": 2048, 00:19:49.755 "data_size": 63488 00:19:49.755 }, 00:19:49.755 { 00:19:49.755 "name": "BaseBdev2", 00:19:49.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.755 "is_configured": false, 00:19:49.755 "data_offset": 0, 00:19:49.755 "data_size": 0 00:19:49.755 }, 00:19:49.755 { 00:19:49.755 "name": "BaseBdev3", 00:19:49.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.755 "is_configured": false, 00:19:49.755 "data_offset": 0, 00:19:49.755 "data_size": 0 00:19:49.755 } 00:19:49.755 ] 00:19:49.755 }' 00:19:49.755 00:33:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.755 00:33:43 -- common/autotest_common.sh@10 -- # set +x 00:19:50.322 00:33:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:50.580 [2024-04-24 00:33:44.287742] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:50.580 [2024-04-24 00:33:44.287801] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:19:50.580 00:33:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:50.580 00:33:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:51.146 00:33:44 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:51.405 BaseBdev1 00:19:51.405 00:33:44 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:51.405 00:33:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:51.405 00:33:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:51.405 00:33:44 -- common/autotest_common.sh@887 -- # local i 00:19:51.405 00:33:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:51.405 00:33:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:51.405 00:33:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:51.663 00:33:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:51.921 [ 00:19:51.921 { 00:19:51.921 "name": "BaseBdev1", 00:19:51.921 "aliases": [ 00:19:51.921 "a7e8e429-e4c3-4b2b-9430-bd791eb093af" 00:19:51.921 ], 00:19:51.921 "product_name": "Malloc disk", 00:19:51.921 "block_size": 512, 00:19:51.921 "num_blocks": 65536, 00:19:51.921 "uuid": "a7e8e429-e4c3-4b2b-9430-bd791eb093af", 00:19:51.921 "assigned_rate_limits": { 00:19:51.921 "rw_ios_per_sec": 0, 00:19:51.921 "rw_mbytes_per_sec": 0, 00:19:51.921 "r_mbytes_per_sec": 0, 00:19:51.921 "w_mbytes_per_sec": 0 00:19:51.921 }, 00:19:51.921 "claimed": false, 00:19:51.921 "zoned": false, 00:19:51.921 "supported_io_types": { 00:19:51.921 "read": true, 00:19:51.921 "write": true, 00:19:51.921 "unmap": true, 00:19:51.921 "write_zeroes": true, 00:19:51.921 "flush": true, 00:19:51.921 "reset": true, 00:19:51.921 "compare": false, 00:19:51.921 "compare_and_write": false, 00:19:51.921 "abort": true, 00:19:51.921 "nvme_admin": false, 00:19:51.921 "nvme_io": false 00:19:51.921 }, 00:19:51.921 "memory_domains": [ 00:19:51.921 { 00:19:51.921 "dma_device_id": "system", 00:19:51.921 "dma_device_type": 1 00:19:51.921 }, 00:19:51.921 { 00:19:51.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.921 "dma_device_type": 2 00:19:51.921 } 00:19:51.921 ], 00:19:51.921 
"driver_specific": {} 00:19:51.921 } 00:19:51.921 ] 00:19:51.921 00:33:45 -- common/autotest_common.sh@893 -- # return 0 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:51.921 [2024-04-24 00:33:45.683560] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.921 [2024-04-24 00:33:45.685807] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:51.921 [2024-04-24 00:33:45.685872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:51.921 [2024-04-24 00:33:45.685883] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:51.921 [2024-04-24 00:33:45.685909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.921 00:33:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.488 00:33:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:52.488 "name": "Existed_Raid", 00:19:52.488 "uuid": "327613bb-b02b-4cae-95eb-a3f0bb37a27c", 00:19:52.488 "strip_size_kb": 0, 00:19:52.488 "state": "configuring", 00:19:52.488 "raid_level": "raid1", 00:19:52.488 "superblock": true, 00:19:52.488 "num_base_bdevs": 3, 00:19:52.488 "num_base_bdevs_discovered": 1, 00:19:52.488 "num_base_bdevs_operational": 3, 00:19:52.488 "base_bdevs_list": [ 00:19:52.488 { 00:19:52.488 "name": "BaseBdev1", 00:19:52.488 "uuid": "a7e8e429-e4c3-4b2b-9430-bd791eb093af", 00:19:52.488 "is_configured": true, 00:19:52.488 "data_offset": 2048, 00:19:52.488 "data_size": 63488 00:19:52.488 }, 00:19:52.488 { 00:19:52.488 "name": "BaseBdev2", 00:19:52.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.488 "is_configured": false, 00:19:52.488 "data_offset": 0, 00:19:52.488 "data_size": 0 00:19:52.488 }, 00:19:52.488 { 00:19:52.488 "name": "BaseBdev3", 00:19:52.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.488 "is_configured": false, 00:19:52.488 "data_offset": 0, 00:19:52.488 "data_size": 0 00:19:52.488 } 00:19:52.488 ] 00:19:52.488 }' 00:19:52.488 00:33:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:52.488 00:33:45 -- common/autotest_common.sh@10 -- # set +x 00:19:53.053 00:33:46 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:53.312 [2024-04-24 00:33:46.968628] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:53.312 BaseBdev2 00:19:53.312 00:33:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:53.312 00:33:46 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:19:53.312 00:33:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:53.312 00:33:46 -- common/autotest_common.sh@887 -- # local i 00:19:53.312 00:33:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:53.312 00:33:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:53.312 00:33:46 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:53.570 00:33:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:53.828 [ 00:19:53.828 { 00:19:53.828 "name": "BaseBdev2", 00:19:53.828 "aliases": [ 00:19:53.828 "7bdf6983-6c27-41d9-ac06-833af0d3d512" 00:19:53.828 ], 00:19:53.828 "product_name": "Malloc disk", 00:19:53.828 "block_size": 512, 00:19:53.828 "num_blocks": 65536, 00:19:53.828 "uuid": "7bdf6983-6c27-41d9-ac06-833af0d3d512", 00:19:53.828 "assigned_rate_limits": { 00:19:53.828 "rw_ios_per_sec": 0, 00:19:53.828 "rw_mbytes_per_sec": 0, 00:19:53.828 "r_mbytes_per_sec": 0, 00:19:53.828 "w_mbytes_per_sec": 0 00:19:53.828 }, 00:19:53.828 "claimed": true, 00:19:53.828 "claim_type": "exclusive_write", 00:19:53.828 "zoned": false, 00:19:53.828 "supported_io_types": { 00:19:53.829 "read": true, 00:19:53.829 "write": true, 00:19:53.829 "unmap": true, 00:19:53.829 "write_zeroes": true, 00:19:53.829 "flush": true, 00:19:53.829 "reset": true, 00:19:53.829 "compare": false, 00:19:53.829 "compare_and_write": false, 00:19:53.829 "abort": true, 00:19:53.829 "nvme_admin": false, 00:19:53.829 "nvme_io": false 00:19:53.829 }, 00:19:53.829 "memory_domains": [ 00:19:53.829 { 00:19:53.829 "dma_device_id": "system", 00:19:53.829 "dma_device_type": 1 00:19:53.829 }, 00:19:53.829 { 00:19:53.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.829 "dma_device_type": 2 00:19:53.829 } 00:19:53.829 ], 00:19:53.829 "driver_specific": {} 00:19:53.829 } 00:19:53.829 ] 00:19:53.829 00:33:47 -- common/autotest_common.sh@893 -- # return 0 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:19:53.829 00:33:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.086 00:33:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:54.086 "name": "Existed_Raid", 00:19:54.086 "uuid": "327613bb-b02b-4cae-95eb-a3f0bb37a27c", 00:19:54.086 "strip_size_kb": 0, 00:19:54.086 "state": "configuring", 00:19:54.086 "raid_level": "raid1", 00:19:54.086 "superblock": true, 00:19:54.086 "num_base_bdevs": 3, 00:19:54.086 "num_base_bdevs_discovered": 2, 00:19:54.086 "num_base_bdevs_operational": 3, 00:19:54.086 "base_bdevs_list": [ 00:19:54.086 { 00:19:54.086 "name": "BaseBdev1", 00:19:54.086 "uuid": "a7e8e429-e4c3-4b2b-9430-bd791eb093af", 00:19:54.086 "is_configured": true, 00:19:54.086 "data_offset": 2048, 00:19:54.086 "data_size": 63488 00:19:54.086 }, 00:19:54.086 { 00:19:54.086 "name": "BaseBdev2", 00:19:54.086 "uuid": "7bdf6983-6c27-41d9-ac06-833af0d3d512", 00:19:54.086 "is_configured": true, 00:19:54.086 "data_offset": 2048, 00:19:54.086 "data_size": 63488 00:19:54.086 }, 00:19:54.086 { 00:19:54.086 "name": "BaseBdev3", 00:19:54.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.086 "is_configured": false, 00:19:54.086 "data_offset": 0, 00:19:54.086 "data_size": 0 00:19:54.086 } 00:19:54.086 ] 00:19:54.086 }' 00:19:54.086 00:33:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:54.086 00:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:55.018 00:33:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:55.018 [2024-04-24 00:33:48.774038] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:55.018 [2024-04-24 00:33:48.774522] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:19:55.018 [2024-04-24 00:33:48.774643] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:55.018 [2024-04-24 00:33:48.774808] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:55.018 [2024-04-24 00:33:48.775364] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:19:55.018 [2024-04-24 00:33:48.775482] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:19:55.018 [2024-04-24 00:33:48.775744] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.018 BaseBdev3 00:19:55.018 00:33:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:55.018 00:33:48 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:19:55.018 00:33:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:55.018 00:33:48 -- common/autotest_common.sh@887 -- # local i 00:19:55.018 00:33:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:55.018 00:33:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:55.018 00:33:48 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:55.584 00:33:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:55.584 [ 00:19:55.584 { 00:19:55.584 "name": "BaseBdev3", 00:19:55.584 "aliases": [ 00:19:55.584 "887c2cb9-e011-4e40-9248-6a0267ec17d2" 00:19:55.584 ], 00:19:55.584 "product_name": "Malloc disk", 00:19:55.584 "block_size": 512, 00:19:55.584 "num_blocks": 65536, 
00:19:55.584 "uuid": "887c2cb9-e011-4e40-9248-6a0267ec17d2", 00:19:55.584 "assigned_rate_limits": { 00:19:55.584 "rw_ios_per_sec": 0, 00:19:55.584 "rw_mbytes_per_sec": 0, 00:19:55.584 "r_mbytes_per_sec": 0, 00:19:55.584 "w_mbytes_per_sec": 0 00:19:55.584 }, 00:19:55.584 "claimed": true, 00:19:55.584 "claim_type": "exclusive_write", 00:19:55.584 "zoned": false, 00:19:55.584 "supported_io_types": { 00:19:55.584 "read": true, 00:19:55.584 "write": true, 00:19:55.584 "unmap": true, 00:19:55.584 "write_zeroes": true, 00:19:55.584 "flush": true, 00:19:55.584 "reset": true, 00:19:55.584 "compare": false, 00:19:55.584 "compare_and_write": false, 00:19:55.584 "abort": true, 00:19:55.584 "nvme_admin": false, 00:19:55.584 "nvme_io": false 00:19:55.584 }, 00:19:55.584 "memory_domains": [ 00:19:55.584 { 00:19:55.584 "dma_device_id": "system", 00:19:55.584 "dma_device_type": 1 00:19:55.584 }, 00:19:55.584 { 00:19:55.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.584 "dma_device_type": 2 00:19:55.584 } 00:19:55.584 ], 00:19:55.584 "driver_specific": {} 00:19:55.584 } 00:19:55.584 ] 00:19:55.584 00:33:49 -- common/autotest_common.sh@893 -- # return 0 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.584 00:33:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.842 00:33:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:55.842 "name": "Existed_Raid", 00:19:55.842 "uuid": "327613bb-b02b-4cae-95eb-a3f0bb37a27c", 00:19:55.842 "strip_size_kb": 0, 00:19:55.842 "state": "online", 00:19:55.842 "raid_level": "raid1", 00:19:55.842 "superblock": true, 00:19:55.842 "num_base_bdevs": 3, 00:19:55.842 "num_base_bdevs_discovered": 3, 00:19:55.842 "num_base_bdevs_operational": 3, 00:19:55.842 "base_bdevs_list": [ 00:19:55.842 { 00:19:55.842 "name": "BaseBdev1", 00:19:55.842 "uuid": "a7e8e429-e4c3-4b2b-9430-bd791eb093af", 00:19:55.842 "is_configured": true, 00:19:55.842 "data_offset": 2048, 00:19:55.842 "data_size": 63488 00:19:55.842 }, 00:19:55.842 { 00:19:55.842 "name": "BaseBdev2", 00:19:55.842 "uuid": "7bdf6983-6c27-41d9-ac06-833af0d3d512", 00:19:55.842 "is_configured": true, 00:19:55.842 "data_offset": 2048, 00:19:55.842 "data_size": 63488 00:19:55.842 }, 00:19:55.842 { 00:19:55.842 "name": "BaseBdev3", 00:19:55.842 "uuid": "887c2cb9-e011-4e40-9248-6a0267ec17d2", 00:19:55.842 "is_configured": true, 00:19:55.842 "data_offset": 2048, 00:19:55.842 "data_size": 63488 00:19:55.842 } 00:19:55.842 ] 00:19:55.842 }' 00:19:55.842 00:33:49 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:55.842 00:33:49 -- common/autotest_common.sh@10 -- # set +x 00:19:56.407 00:33:50 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:56.677 [2024-04-24 00:33:50.378529] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:56.935 "name": "Existed_Raid", 00:19:56.935 "uuid": "327613bb-b02b-4cae-95eb-a3f0bb37a27c", 00:19:56.935 "strip_size_kb": 0, 00:19:56.935 "state": "online", 00:19:56.935 "raid_level": "raid1", 00:19:56.935 "superblock": true, 00:19:56.935 "num_base_bdevs": 3, 00:19:56.935 "num_base_bdevs_discovered": 2, 00:19:56.935 "num_base_bdevs_operational": 2, 00:19:56.935 "base_bdevs_list": [ 00:19:56.935 { 00:19:56.935 "name": null, 00:19:56.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.935 "is_configured": false, 00:19:56.935 "data_offset": 2048, 00:19:56.935 "data_size": 63488 00:19:56.935 }, 00:19:56.935 { 00:19:56.935 "name": "BaseBdev2", 00:19:56.935 "uuid": "7bdf6983-6c27-41d9-ac06-833af0d3d512", 00:19:56.935 "is_configured": true, 00:19:56.935 "data_offset": 2048, 00:19:56.935 "data_size": 63488 00:19:56.935 }, 00:19:56.935 { 00:19:56.935 "name": "BaseBdev3", 00:19:56.935 "uuid": "887c2cb9-e011-4e40-9248-6a0267ec17d2", 00:19:56.935 "is_configured": true, 00:19:56.935 "data_offset": 2048, 00:19:56.935 "data_size": 63488 00:19:56.935 } 00:19:56.935 ] 00:19:56.935 }' 00:19:56.935 00:33:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:56.935 00:33:50 -- common/autotest_common.sh@10 -- # set +x 00:19:57.868 00:33:51 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:57.868 00:33:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:57.868 00:33:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.868 00:33:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:57.868 00:33:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:57.868 00:33:51 -- bdev/bdev_raid.sh@275 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:19:58.125 00:33:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:58.384 [2024-04-24 00:33:51.920161] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:58.384 00:33:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:58.384 00:33:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:58.384 00:33:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.384 00:33:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:58.642 00:33:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:58.642 00:33:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:58.642 00:33:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:58.941 [2024-04-24 00:33:52.545100] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:58.941 [2024-04-24 00:33:52.545375] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.941 [2024-04-24 00:33:52.660107] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.941 [2024-04-24 00:33:52.660436] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.941 [2024-04-24 00:33:52.660565] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:19:58.941 00:33:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:58.941 00:33:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:58.941 00:33:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.941 00:33:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:59.225 00:33:52 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:59.225 00:33:52 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:59.225 00:33:52 -- bdev/bdev_raid.sh@287 -- # killprocess 125960 00:19:59.225 00:33:52 -- common/autotest_common.sh@936 -- # '[' -z 125960 ']' 00:19:59.225 00:33:52 -- common/autotest_common.sh@940 -- # kill -0 125960 00:19:59.225 00:33:52 -- common/autotest_common.sh@941 -- # uname 00:19:59.225 00:33:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.225 00:33:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125960 00:19:59.225 00:33:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:59.225 00:33:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:59.225 00:33:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125960' 00:19:59.225 killing process with pid 125960 00:19:59.225 00:33:53 -- common/autotest_common.sh@955 -- # kill 125960 00:19:59.225 [2024-04-24 00:33:53.005224] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:59.225 00:33:53 -- common/autotest_common.sh@960 -- # wait 125960 00:19:59.225 [2024-04-24 00:33:53.005495] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:01.124 00:20:01.124 real 0m14.642s 00:20:01.124 user 0m25.231s 00:20:01.124 sys 0m1.910s 00:20:01.124 00:33:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:01.124 00:33:54 -- common/autotest_common.sh@10 
-- # set +x 00:20:01.124 ************************************ 00:20:01.124 END TEST raid_state_function_test_sb 00:20:01.124 ************************************ 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:20:01.124 00:33:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:20:01.124 00:33:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:01.124 00:33:54 -- common/autotest_common.sh@10 -- # set +x 00:20:01.124 ************************************ 00:20:01.124 START TEST raid_superblock_test 00:20:01.124 ************************************ 00:20:01.124 00:33:54 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid1 3 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@357 -- # raid_pid=126366 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:01.124 00:33:54 -- bdev/bdev_raid.sh@358 -- # waitforlisten 126366 /var/tmp/spdk-raid.sock 00:20:01.124 00:33:54 -- common/autotest_common.sh@817 -- # '[' -z 126366 ']' 00:20:01.124 00:33:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:01.124 00:33:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:01.124 00:33:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:01.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:01.124 00:33:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:01.124 00:33:54 -- common/autotest_common.sh@10 -- # set +x 00:20:01.124 [2024-04-24 00:33:54.601312] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
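For reference, the RPC sequence exercised in the raid_state_function_test_sb run above can be replayed by hand against a running SPDK target. This is a minimal sketch, assuming SPDK's scripts/rpc.py and a target (e.g. bdev_svc) already listening on /var/tmp/spdk-raid.sock; it uses only the RPC calls that appear in the log itself.

  # create three 32 MiB malloc bdevs with 512-byte blocks
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3

  # assemble them into a raid1 bdev with an on-disk superblock (-s)
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # inspect the raid bdev state, as the test's verify_raid_bdev_state helper does
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'

  # tear it down again
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid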
00:20:01.124 [2024-04-24 00:33:54.601802] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126366 ] 00:20:01.124 [2024-04-24 00:33:54.785486] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.416 [2024-04-24 00:33:55.056094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.674 [2024-04-24 00:33:55.298137] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.931 00:33:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:01.931 00:33:55 -- common/autotest_common.sh@850 -- # return 0 00:20:01.931 00:33:55 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:01.931 00:33:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:01.931 00:33:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:01.931 00:33:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:01.931 00:33:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:01.931 00:33:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:01.931 00:33:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:01.931 00:33:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:01.931 00:33:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:02.188 malloc1 00:20:02.188 00:33:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:02.446 [2024-04-24 00:33:56.112407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:02.446 [2024-04-24 00:33:56.112737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.446 [2024-04-24 00:33:56.112884] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:02.446 [2024-04-24 00:33:56.113034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.446 [2024-04-24 00:33:56.115925] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.446 [2024-04-24 00:33:56.116130] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:02.446 pt1 00:20:02.446 00:33:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:02.446 00:33:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:02.446 00:33:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:02.446 00:33:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:02.446 00:33:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:02.446 00:33:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:02.446 00:33:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:02.446 00:33:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:02.446 00:33:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:02.703 malloc2 00:20:02.703 00:33:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:20:02.961 [2024-04-24 00:33:56.676246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:02.961 [2024-04-24 00:33:56.676558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.961 [2024-04-24 00:33:56.676727] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:02.961 [2024-04-24 00:33:56.676896] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.961 [2024-04-24 00:33:56.679946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.961 [2024-04-24 00:33:56.680144] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:02.961 pt2 00:20:02.961 00:33:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:02.961 00:33:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:02.961 00:33:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:02.961 00:33:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:02.961 00:33:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:02.961 00:33:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:02.961 00:33:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:02.961 00:33:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:02.961 00:33:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:03.219 malloc3 00:20:03.219 00:33:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:03.478 [2024-04-24 00:33:57.210042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:03.478 [2024-04-24 00:33:57.210339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.478 [2024-04-24 00:33:57.210433] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:03.478 [2024-04-24 00:33:57.210652] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.478 [2024-04-24 00:33:57.213678] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.478 [2024-04-24 00:33:57.213885] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:03.478 pt3 00:20:03.478 00:33:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:03.478 00:33:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:03.478 00:33:57 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:20:03.736 [2024-04-24 00:33:57.482357] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:03.736 [2024-04-24 00:33:57.485129] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:03.736 [2024-04-24 00:33:57.485366] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:03.736 [2024-04-24 00:33:57.485739] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:20:03.736 [2024-04-24 00:33:57.485871] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:03.736 [2024-04-24 00:33:57.486095] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:03.736 [2024-04-24 00:33:57.486622] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:20:03.736 [2024-04-24 00:33:57.486774] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:20:03.736 [2024-04-24 00:33:57.487156] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.736 00:33:57 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:03.736 00:33:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:03.736 00:33:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:03.736 00:33:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:03.736 00:33:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:03.736 00:33:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:03.736 00:33:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.736 00:33:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:03.736 00:33:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:03.736 00:33:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:03.736 00:33:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.736 00:33:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.994 00:33:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:03.994 "name": "raid_bdev1", 00:20:03.994 "uuid": "42ea901a-b666-4175-860f-bb43ffbc9951", 00:20:03.994 "strip_size_kb": 0, 00:20:03.994 "state": "online", 00:20:03.994 "raid_level": "raid1", 00:20:03.994 "superblock": true, 00:20:03.994 "num_base_bdevs": 3, 00:20:03.994 "num_base_bdevs_discovered": 3, 00:20:03.994 "num_base_bdevs_operational": 3, 00:20:03.994 "base_bdevs_list": [ 00:20:03.994 { 00:20:03.994 "name": "pt1", 00:20:03.994 "uuid": "c3fd7d30-59a5-5f92-b526-3c147c3b6ddd", 00:20:03.994 "is_configured": true, 00:20:03.994 "data_offset": 2048, 00:20:03.994 "data_size": 63488 00:20:03.994 }, 00:20:03.994 { 00:20:03.994 "name": "pt2", 00:20:03.994 "uuid": "bd5c3cd3-856d-500a-b7d8-74da30e6e1df", 00:20:03.994 "is_configured": true, 00:20:03.994 "data_offset": 2048, 00:20:03.994 "data_size": 63488 00:20:03.994 }, 00:20:03.994 { 00:20:03.994 "name": "pt3", 00:20:03.994 "uuid": "16761e7d-a548-55ad-b3f2-7d0ec4f41616", 00:20:03.994 "is_configured": true, 00:20:03.994 "data_offset": 2048, 00:20:03.994 "data_size": 63488 00:20:03.994 } 00:20:03.994 ] 00:20:03.994 }' 00:20:03.994 00:33:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:03.994 00:33:57 -- common/autotest_common.sh@10 -- # set +x 00:20:04.926 00:33:58 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:04.926 00:33:58 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:04.926 [2024-04-24 00:33:58.699658] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:05.183 00:33:58 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=42ea901a-b666-4175-860f-bb43ffbc9951 00:20:05.183 00:33:58 -- bdev/bdev_raid.sh@380 -- # '[' -z 42ea901a-b666-4175-860f-bb43ffbc9951 ']' 00:20:05.183 00:33:58 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:05.441 [2024-04-24 00:33:58.975411] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.441 [2024-04-24 00:33:58.975595] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:05.441 [2024-04-24 00:33:58.975763] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:05.441 [2024-04-24 00:33:58.975937] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:05.441 [2024-04-24 00:33:58.976026] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:20:05.441 00:33:58 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.441 00:33:58 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:05.441 00:33:59 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:05.441 00:33:59 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:05.441 00:33:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:05.441 00:33:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:05.699 00:33:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:05.699 00:33:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:05.957 00:33:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:05.957 00:33:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:06.216 00:33:59 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:06.216 00:33:59 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:06.474 00:34:00 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:06.474 00:34:00 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:06.475 00:34:00 -- common/autotest_common.sh@638 -- # local es=0 00:20:06.475 00:34:00 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:06.475 00:34:00 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.475 00:34:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:06.475 00:34:00 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.475 00:34:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:06.475 00:34:00 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.475 00:34:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:06.475 00:34:00 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.475 00:34:00 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:06.475 00:34:00 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:06.733 [2024-04-24 00:34:00.451726] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:06.733 [2024-04-24 00:34:00.454169] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:06.733 [2024-04-24 00:34:00.454404] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:06.733 [2024-04-24 00:34:00.454553] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:06.733 [2024-04-24 00:34:00.454730] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:06.733 [2024-04-24 00:34:00.454854] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:06.733 [2024-04-24 00:34:00.454942] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:06.733 [2024-04-24 00:34:00.455011] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:20:06.733 request: 00:20:06.733 { 00:20:06.733 "name": "raid_bdev1", 00:20:06.733 "raid_level": "raid1", 00:20:06.733 "base_bdevs": [ 00:20:06.733 "malloc1", 00:20:06.733 "malloc2", 00:20:06.733 "malloc3" 00:20:06.733 ], 00:20:06.733 "superblock": false, 00:20:06.733 "method": "bdev_raid_create", 00:20:06.733 "req_id": 1 00:20:06.733 } 00:20:06.733 Got JSON-RPC error response 00:20:06.733 response: 00:20:06.733 { 00:20:06.733 "code": -17, 00:20:06.733 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:06.733 } 00:20:06.733 00:34:00 -- common/autotest_common.sh@641 -- # es=1 00:20:06.733 00:34:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:06.733 00:34:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:06.733 00:34:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:06.733 00:34:00 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.733 00:34:00 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:06.991 00:34:00 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:06.991 00:34:00 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:06.991 00:34:00 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:07.249 [2024-04-24 00:34:01.007802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:07.249 [2024-04-24 00:34:01.008079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.249 [2024-04-24 00:34:01.008215] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:07.249 [2024-04-24 00:34:01.008321] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.249 [2024-04-24 00:34:01.010918] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.249 [2024-04-24 00:34:01.011106] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:07.249 [2024-04-24 00:34:01.011337] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:07.249 [2024-04-24 00:34:01.011479] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:07.249 pt1 00:20:07.249 00:34:01 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:07.249 
00:34:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:07.249 00:34:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:07.249 00:34:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.249 00:34:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.249 00:34:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:07.249 00:34:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.249 00:34:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.249 00:34:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.249 00:34:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.249 00:34:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.249 00:34:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.507 00:34:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.507 "name": "raid_bdev1", 00:20:07.507 "uuid": "42ea901a-b666-4175-860f-bb43ffbc9951", 00:20:07.507 "strip_size_kb": 0, 00:20:07.507 "state": "configuring", 00:20:07.507 "raid_level": "raid1", 00:20:07.507 "superblock": true, 00:20:07.507 "num_base_bdevs": 3, 00:20:07.507 "num_base_bdevs_discovered": 1, 00:20:07.507 "num_base_bdevs_operational": 3, 00:20:07.507 "base_bdevs_list": [ 00:20:07.507 { 00:20:07.507 "name": "pt1", 00:20:07.507 "uuid": "c3fd7d30-59a5-5f92-b526-3c147c3b6ddd", 00:20:07.507 "is_configured": true, 00:20:07.507 "data_offset": 2048, 00:20:07.507 "data_size": 63488 00:20:07.507 }, 00:20:07.507 { 00:20:07.507 "name": null, 00:20:07.507 "uuid": "bd5c3cd3-856d-500a-b7d8-74da30e6e1df", 00:20:07.507 "is_configured": false, 00:20:07.507 "data_offset": 2048, 00:20:07.507 "data_size": 63488 00:20:07.507 }, 00:20:07.507 { 00:20:07.507 "name": null, 00:20:07.507 "uuid": "16761e7d-a548-55ad-b3f2-7d0ec4f41616", 00:20:07.507 "is_configured": false, 00:20:07.507 "data_offset": 2048, 00:20:07.507 "data_size": 63488 00:20:07.507 } 00:20:07.507 ] 00:20:07.507 }' 00:20:07.507 00:34:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.507 00:34:01 -- common/autotest_common.sh@10 -- # set +x 00:20:08.072 00:34:01 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:20:08.072 00:34:01 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:08.644 [2024-04-24 00:34:02.136085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:08.644 [2024-04-24 00:34:02.136397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.644 [2024-04-24 00:34:02.136489] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:08.644 [2024-04-24 00:34:02.136669] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.644 [2024-04-24 00:34:02.137219] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.644 [2024-04-24 00:34:02.137365] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:08.644 [2024-04-24 00:34:02.137598] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:08.644 [2024-04-24 00:34:02.137716] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:08.644 pt2 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:08.644 [2024-04-24 00:34:02.352259] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.644 00:34:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.904 00:34:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:08.904 "name": "raid_bdev1", 00:20:08.904 "uuid": "42ea901a-b666-4175-860f-bb43ffbc9951", 00:20:08.904 "strip_size_kb": 0, 00:20:08.904 "state": "configuring", 00:20:08.904 "raid_level": "raid1", 00:20:08.904 "superblock": true, 00:20:08.904 "num_base_bdevs": 3, 00:20:08.904 "num_base_bdevs_discovered": 1, 00:20:08.904 "num_base_bdevs_operational": 3, 00:20:08.904 "base_bdevs_list": [ 00:20:08.904 { 00:20:08.904 "name": "pt1", 00:20:08.904 "uuid": "c3fd7d30-59a5-5f92-b526-3c147c3b6ddd", 00:20:08.904 "is_configured": true, 00:20:08.904 "data_offset": 2048, 00:20:08.904 "data_size": 63488 00:20:08.904 }, 00:20:08.904 { 00:20:08.904 "name": null, 00:20:08.904 "uuid": "bd5c3cd3-856d-500a-b7d8-74da30e6e1df", 00:20:08.904 "is_configured": false, 00:20:08.904 "data_offset": 2048, 00:20:08.904 "data_size": 63488 00:20:08.904 }, 00:20:08.904 { 00:20:08.904 "name": null, 00:20:08.904 "uuid": "16761e7d-a548-55ad-b3f2-7d0ec4f41616", 00:20:08.904 "is_configured": false, 00:20:08.904 "data_offset": 2048, 00:20:08.904 "data_size": 63488 00:20:08.904 } 00:20:08.904 ] 00:20:08.904 }' 00:20:08.904 00:34:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.904 00:34:02 -- common/autotest_common.sh@10 -- # set +x 00:20:09.837 00:34:03 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:09.837 00:34:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:09.837 00:34:03 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:09.837 [2024-04-24 00:34:03.570637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:09.837 [2024-04-24 00:34:03.570932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.837 [2024-04-24 00:34:03.571079] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:09.837 [2024-04-24 00:34:03.571186] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.837 [2024-04-24 00:34:03.571702] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.837 [2024-04-24 00:34:03.571878] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:09.837 [2024-04-24 00:34:03.572103] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:09.837 [2024-04-24 00:34:03.572207] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.837 pt2 00:20:09.837 00:34:03 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:09.837 00:34:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:09.837 00:34:03 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:10.095 [2024-04-24 00:34:03.786655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:10.095 [2024-04-24 00:34:03.786929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.095 [2024-04-24 00:34:03.787084] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:10.095 [2024-04-24 00:34:03.787208] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.095 [2024-04-24 00:34:03.787781] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.095 [2024-04-24 00:34:03.787955] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:10.095 [2024-04-24 00:34:03.788240] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:10.095 [2024-04-24 00:34:03.788347] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:10.095 [2024-04-24 00:34:03.788552] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:20:10.095 [2024-04-24 00:34:03.788655] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:10.095 [2024-04-24 00:34:03.788817] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:10.095 [2024-04-24 00:34:03.789270] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:20:10.095 [2024-04-24 00:34:03.789412] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:20:10.095 [2024-04-24 00:34:03.789686] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.095 pt3 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:10.095 00:34:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.095 00:34:03 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.353 00:34:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:10.353 "name": "raid_bdev1", 00:20:10.353 "uuid": "42ea901a-b666-4175-860f-bb43ffbc9951", 00:20:10.353 "strip_size_kb": 0, 00:20:10.353 "state": "online", 00:20:10.353 "raid_level": "raid1", 00:20:10.353 "superblock": true, 00:20:10.353 "num_base_bdevs": 3, 00:20:10.353 "num_base_bdevs_discovered": 3, 00:20:10.353 "num_base_bdevs_operational": 3, 00:20:10.353 "base_bdevs_list": [ 00:20:10.353 { 00:20:10.353 "name": "pt1", 00:20:10.353 "uuid": "c3fd7d30-59a5-5f92-b526-3c147c3b6ddd", 00:20:10.353 "is_configured": true, 00:20:10.353 "data_offset": 2048, 00:20:10.353 "data_size": 63488 00:20:10.353 }, 00:20:10.353 { 00:20:10.353 "name": "pt2", 00:20:10.353 "uuid": "bd5c3cd3-856d-500a-b7d8-74da30e6e1df", 00:20:10.353 "is_configured": true, 00:20:10.353 "data_offset": 2048, 00:20:10.353 "data_size": 63488 00:20:10.353 }, 00:20:10.353 { 00:20:10.353 "name": "pt3", 00:20:10.353 "uuid": "16761e7d-a548-55ad-b3f2-7d0ec4f41616", 00:20:10.353 "is_configured": true, 00:20:10.353 "data_offset": 2048, 00:20:10.353 "data_size": 63488 00:20:10.353 } 00:20:10.353 ] 00:20:10.353 }' 00:20:10.353 00:34:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:10.353 00:34:04 -- common/autotest_common.sh@10 -- # set +x 00:20:10.919 00:34:04 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:10.919 00:34:04 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:11.225 [2024-04-24 00:34:04.867151] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.225 00:34:04 -- bdev/bdev_raid.sh@430 -- # '[' 42ea901a-b666-4175-860f-bb43ffbc9951 '!=' 42ea901a-b666-4175-860f-bb43ffbc9951 ']' 00:20:11.225 00:34:04 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:20:11.225 00:34:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:11.225 00:34:04 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:11.225 00:34:04 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:11.483 [2024-04-24 00:34:05.155051] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:11.483 00:34:05 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:11.483 00:34:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:11.483 00:34:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:11.483 00:34:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:11.483 00:34:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:11.483 00:34:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:11.483 00:34:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:11.483 00:34:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:11.483 00:34:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:11.483 00:34:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:11.483 00:34:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.483 00:34:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.742 00:34:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:11.742 "name": "raid_bdev1", 00:20:11.742 "uuid": "42ea901a-b666-4175-860f-bb43ffbc9951", 00:20:11.742 "strip_size_kb": 0, 00:20:11.742 "state": "online", 
00:20:11.742 "raid_level": "raid1", 00:20:11.742 "superblock": true, 00:20:11.742 "num_base_bdevs": 3, 00:20:11.742 "num_base_bdevs_discovered": 2, 00:20:11.742 "num_base_bdevs_operational": 2, 00:20:11.742 "base_bdevs_list": [ 00:20:11.742 { 00:20:11.742 "name": null, 00:20:11.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.742 "is_configured": false, 00:20:11.742 "data_offset": 2048, 00:20:11.742 "data_size": 63488 00:20:11.742 }, 00:20:11.742 { 00:20:11.742 "name": "pt2", 00:20:11.742 "uuid": "bd5c3cd3-856d-500a-b7d8-74da30e6e1df", 00:20:11.742 "is_configured": true, 00:20:11.742 "data_offset": 2048, 00:20:11.742 "data_size": 63488 00:20:11.742 }, 00:20:11.742 { 00:20:11.742 "name": "pt3", 00:20:11.742 "uuid": "16761e7d-a548-55ad-b3f2-7d0ec4f41616", 00:20:11.742 "is_configured": true, 00:20:11.742 "data_offset": 2048, 00:20:11.742 "data_size": 63488 00:20:11.742 } 00:20:11.742 ] 00:20:11.742 }' 00:20:11.742 00:34:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:11.742 00:34:05 -- common/autotest_common.sh@10 -- # set +x 00:20:12.308 00:34:06 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:12.567 [2024-04-24 00:34:06.339301] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:12.567 [2024-04-24 00:34:06.339492] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.567 [2024-04-24 00:34:06.339715] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.567 [2024-04-24 00:34:06.339863] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.567 [2024-04-24 00:34:06.339970] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:20:12.826 00:34:06 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.826 00:34:06 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:20:13.084 00:34:06 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:20:13.084 00:34:06 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:20:13.084 00:34:06 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:20:13.084 00:34:06 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:13.084 00:34:06 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:13.084 00:34:06 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:13.084 00:34:06 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:13.084 00:34:06 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:13.343 00:34:07 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:13.343 00:34:07 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:13.343 00:34:07 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:20:13.343 00:34:07 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:13.343 00:34:07 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:13.602 [2024-04-24 00:34:07.303447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:13.602 [2024-04-24 00:34:07.303679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.602 [2024-04-24 
00:34:07.303870] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:13.602 [2024-04-24 00:34:07.303964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.602 [2024-04-24 00:34:07.306403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.602 [2024-04-24 00:34:07.306583] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:13.602 [2024-04-24 00:34:07.306808] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:13.602 [2024-04-24 00:34:07.306960] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:13.602 pt2 00:20:13.602 00:34:07 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:13.602 00:34:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:13.602 00:34:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:13.602 00:34:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:13.602 00:34:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:13.602 00:34:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:13.602 00:34:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:13.602 00:34:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:13.602 00:34:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:13.602 00:34:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:13.603 00:34:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.603 00:34:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.861 00:34:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:13.861 "name": "raid_bdev1", 00:20:13.861 "uuid": "42ea901a-b666-4175-860f-bb43ffbc9951", 00:20:13.861 "strip_size_kb": 0, 00:20:13.861 "state": "configuring", 00:20:13.861 "raid_level": "raid1", 00:20:13.861 "superblock": true, 00:20:13.861 "num_base_bdevs": 3, 00:20:13.861 "num_base_bdevs_discovered": 1, 00:20:13.861 "num_base_bdevs_operational": 2, 00:20:13.861 "base_bdevs_list": [ 00:20:13.861 { 00:20:13.861 "name": null, 00:20:13.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.861 "is_configured": false, 00:20:13.861 "data_offset": 2048, 00:20:13.861 "data_size": 63488 00:20:13.861 }, 00:20:13.861 { 00:20:13.861 "name": "pt2", 00:20:13.861 "uuid": "bd5c3cd3-856d-500a-b7d8-74da30e6e1df", 00:20:13.861 "is_configured": true, 00:20:13.861 "data_offset": 2048, 00:20:13.861 "data_size": 63488 00:20:13.861 }, 00:20:13.861 { 00:20:13.861 "name": null, 00:20:13.861 "uuid": "16761e7d-a548-55ad-b3f2-7d0ec4f41616", 00:20:13.861 "is_configured": false, 00:20:13.861 "data_offset": 2048, 00:20:13.861 "data_size": 63488 00:20:13.861 } 00:20:13.861 ] 00:20:13.861 }' 00:20:13.861 00:34:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:13.861 00:34:07 -- common/autotest_common.sh@10 -- # set +x 00:20:14.427 00:34:08 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:14.427 00:34:08 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:14.427 00:34:08 -- bdev/bdev_raid.sh@462 -- # i=2 00:20:14.427 00:34:08 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:14.686 [2024-04-24 00:34:08.455698] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:14.686 [2024-04-24 00:34:08.455942] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.686 [2024-04-24 00:34:08.456127] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:14.686 [2024-04-24 00:34:08.456232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.686 [2024-04-24 00:34:08.456780] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.686 [2024-04-24 00:34:08.456941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:14.686 [2024-04-24 00:34:08.457189] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:14.686 [2024-04-24 00:34:08.457314] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:14.686 [2024-04-24 00:34:08.457533] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:20:14.686 [2024-04-24 00:34:08.457632] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:14.686 [2024-04-24 00:34:08.457852] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:14.686 [2024-04-24 00:34:08.458307] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:20:14.686 [2024-04-24 00:34:08.458428] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:20:14.686 [2024-04-24 00:34:08.458660] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.686 pt3 00:20:14.944 00:34:08 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:14.944 00:34:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:14.944 00:34:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:14.944 00:34:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:14.944 00:34:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:14.944 00:34:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:14.944 00:34:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:14.944 00:34:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:14.944 00:34:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:14.944 00:34:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:14.944 00:34:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.944 00:34:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.202 00:34:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:15.202 "name": "raid_bdev1", 00:20:15.202 "uuid": "42ea901a-b666-4175-860f-bb43ffbc9951", 00:20:15.202 "strip_size_kb": 0, 00:20:15.202 "state": "online", 00:20:15.202 "raid_level": "raid1", 00:20:15.202 "superblock": true, 00:20:15.202 "num_base_bdevs": 3, 00:20:15.202 "num_base_bdevs_discovered": 2, 00:20:15.202 "num_base_bdevs_operational": 2, 00:20:15.202 "base_bdevs_list": [ 00:20:15.202 { 00:20:15.202 "name": null, 00:20:15.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.202 "is_configured": false, 00:20:15.202 "data_offset": 2048, 00:20:15.202 "data_size": 63488 00:20:15.202 }, 00:20:15.202 { 00:20:15.202 "name": "pt2", 00:20:15.202 "uuid": "bd5c3cd3-856d-500a-b7d8-74da30e6e1df", 00:20:15.202 
"is_configured": true, 00:20:15.202 "data_offset": 2048, 00:20:15.202 "data_size": 63488 00:20:15.202 }, 00:20:15.202 { 00:20:15.202 "name": "pt3", 00:20:15.202 "uuid": "16761e7d-a548-55ad-b3f2-7d0ec4f41616", 00:20:15.202 "is_configured": true, 00:20:15.202 "data_offset": 2048, 00:20:15.202 "data_size": 63488 00:20:15.202 } 00:20:15.202 ] 00:20:15.202 }' 00:20:15.202 00:34:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:15.202 00:34:08 -- common/autotest_common.sh@10 -- # set +x 00:20:15.767 00:34:09 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:20:15.767 00:34:09 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:16.101 [2024-04-24 00:34:09.687945] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:16.101 [2024-04-24 00:34:09.688099] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:16.101 [2024-04-24 00:34:09.688238] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:16.101 [2024-04-24 00:34:09.688394] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:16.101 [2024-04-24 00:34:09.688489] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:20:16.101 00:34:09 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.101 00:34:09 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:20:16.361 00:34:09 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:20:16.361 00:34:09 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:20:16.361 00:34:09 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:16.619 [2024-04-24 00:34:10.204053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:16.619 [2024-04-24 00:34:10.204338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.619 [2024-04-24 00:34:10.204417] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:16.619 [2024-04-24 00:34:10.204529] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.619 [2024-04-24 00:34:10.207150] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.619 [2024-04-24 00:34:10.207324] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:16.619 [2024-04-24 00:34:10.207557] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:16.619 [2024-04-24 00:34:10.207699] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:16.620 pt1 00:20:16.620 00:34:10 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:16.620 00:34:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:16.620 00:34:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:16.620 00:34:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:16.620 00:34:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:16.620 00:34:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:16.620 00:34:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.620 00:34:10 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:20:16.620 00:34:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.620 00:34:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.620 00:34:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.620 00:34:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.879 00:34:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:16.879 "name": "raid_bdev1", 00:20:16.879 "uuid": "42ea901a-b666-4175-860f-bb43ffbc9951", 00:20:16.879 "strip_size_kb": 0, 00:20:16.879 "state": "configuring", 00:20:16.879 "raid_level": "raid1", 00:20:16.879 "superblock": true, 00:20:16.879 "num_base_bdevs": 3, 00:20:16.879 "num_base_bdevs_discovered": 1, 00:20:16.879 "num_base_bdevs_operational": 3, 00:20:16.879 "base_bdevs_list": [ 00:20:16.879 { 00:20:16.879 "name": "pt1", 00:20:16.879 "uuid": "c3fd7d30-59a5-5f92-b526-3c147c3b6ddd", 00:20:16.879 "is_configured": true, 00:20:16.879 "data_offset": 2048, 00:20:16.879 "data_size": 63488 00:20:16.879 }, 00:20:16.879 { 00:20:16.879 "name": null, 00:20:16.879 "uuid": "bd5c3cd3-856d-500a-b7d8-74da30e6e1df", 00:20:16.879 "is_configured": false, 00:20:16.879 "data_offset": 2048, 00:20:16.879 "data_size": 63488 00:20:16.879 }, 00:20:16.879 { 00:20:16.879 "name": null, 00:20:16.879 "uuid": "16761e7d-a548-55ad-b3f2-7d0ec4f41616", 00:20:16.879 "is_configured": false, 00:20:16.879 "data_offset": 2048, 00:20:16.879 "data_size": 63488 00:20:16.879 } 00:20:16.879 ] 00:20:16.879 }' 00:20:16.879 00:34:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:16.879 00:34:10 -- common/autotest_common.sh@10 -- # set +x 00:20:17.446 00:34:11 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:20:17.446 00:34:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:17.446 00:34:11 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:17.703 00:34:11 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:17.703 00:34:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:17.703 00:34:11 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:17.961 00:34:11 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:17.961 00:34:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:17.961 00:34:11 -- bdev/bdev_raid.sh@489 -- # i=2 00:20:17.961 00:34:11 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:18.218 [2024-04-24 00:34:11.912471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:18.218 [2024-04-24 00:34:11.912745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.218 [2024-04-24 00:34:11.912818] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:18.218 [2024-04-24 00:34:11.912917] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.218 [2024-04-24 00:34:11.913440] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.218 [2024-04-24 00:34:11.913614] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:18.218 [2024-04-24 00:34:11.913824] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:18.218 
[2024-04-24 00:34:11.913960] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:18.218 [2024-04-24 00:34:11.914039] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:18.218 [2024-04-24 00:34:11.914091] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state configuring 00:20:18.218 [2024-04-24 00:34:11.914336] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:18.218 pt3 00:20:18.218 00:34:11 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:18.218 00:34:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:18.219 00:34:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:18.219 00:34:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:18.219 00:34:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:18.219 00:34:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:18.219 00:34:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:18.219 00:34:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:18.219 00:34:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:18.219 00:34:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:18.219 00:34:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.219 00:34:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.475 00:34:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:18.475 "name": "raid_bdev1", 00:20:18.475 "uuid": "42ea901a-b666-4175-860f-bb43ffbc9951", 00:20:18.476 "strip_size_kb": 0, 00:20:18.476 "state": "configuring", 00:20:18.476 "raid_level": "raid1", 00:20:18.476 "superblock": true, 00:20:18.476 "num_base_bdevs": 3, 00:20:18.476 "num_base_bdevs_discovered": 1, 00:20:18.476 "num_base_bdevs_operational": 2, 00:20:18.476 "base_bdevs_list": [ 00:20:18.476 { 00:20:18.476 "name": null, 00:20:18.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.476 "is_configured": false, 00:20:18.476 "data_offset": 2048, 00:20:18.476 "data_size": 63488 00:20:18.476 }, 00:20:18.476 { 00:20:18.476 "name": null, 00:20:18.476 "uuid": "bd5c3cd3-856d-500a-b7d8-74da30e6e1df", 00:20:18.476 "is_configured": false, 00:20:18.476 "data_offset": 2048, 00:20:18.476 "data_size": 63488 00:20:18.476 }, 00:20:18.476 { 00:20:18.476 "name": "pt3", 00:20:18.476 "uuid": "16761e7d-a548-55ad-b3f2-7d0ec4f41616", 00:20:18.476 "is_configured": true, 00:20:18.476 "data_offset": 2048, 00:20:18.476 "data_size": 63488 00:20:18.476 } 00:20:18.476 ] 00:20:18.476 }' 00:20:18.476 00:34:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:18.476 00:34:12 -- common/autotest_common.sh@10 -- # set +x 00:20:19.407 00:34:12 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:20:19.407 00:34:12 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:19.407 00:34:12 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:19.407 [2024-04-24 00:34:13.108714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:19.407 [2024-04-24 00:34:13.108983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.407 [2024-04-24 00:34:13.109055] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:19.407 [2024-04-24 00:34:13.109295] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.407 [2024-04-24 00:34:13.109865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.407 [2024-04-24 00:34:13.110022] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:19.407 [2024-04-24 00:34:13.110238] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:19.407 [2024-04-24 00:34:13.110355] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:19.407 [2024-04-24 00:34:13.110528] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:20:19.407 [2024-04-24 00:34:13.110621] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:19.407 [2024-04-24 00:34:13.110803] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:20:19.407 [2024-04-24 00:34:13.111254] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:20:19.407 [2024-04-24 00:34:13.111390] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011f80 00:20:19.408 [2024-04-24 00:34:13.111661] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.408 pt2 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.408 00:34:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.665 00:34:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:19.665 "name": "raid_bdev1", 00:20:19.665 "uuid": "42ea901a-b666-4175-860f-bb43ffbc9951", 00:20:19.665 "strip_size_kb": 0, 00:20:19.665 "state": "online", 00:20:19.665 "raid_level": "raid1", 00:20:19.665 "superblock": true, 00:20:19.665 "num_base_bdevs": 3, 00:20:19.665 "num_base_bdevs_discovered": 2, 00:20:19.665 "num_base_bdevs_operational": 2, 00:20:19.665 "base_bdevs_list": [ 00:20:19.665 { 00:20:19.665 "name": null, 00:20:19.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.665 "is_configured": false, 00:20:19.665 "data_offset": 2048, 00:20:19.665 "data_size": 63488 00:20:19.665 }, 00:20:19.665 { 00:20:19.665 "name": "pt2", 00:20:19.665 "uuid": "bd5c3cd3-856d-500a-b7d8-74da30e6e1df", 00:20:19.665 "is_configured": true, 00:20:19.665 "data_offset": 2048, 00:20:19.665 "data_size": 63488 00:20:19.665 
}, 00:20:19.665 { 00:20:19.665 "name": "pt3", 00:20:19.665 "uuid": "16761e7d-a548-55ad-b3f2-7d0ec4f41616", 00:20:19.665 "is_configured": true, 00:20:19.665 "data_offset": 2048, 00:20:19.665 "data_size": 63488 00:20:19.665 } 00:20:19.665 ] 00:20:19.665 }' 00:20:19.665 00:34:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:19.665 00:34:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.597 00:34:14 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:20.597 00:34:14 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:20:20.597 [2024-04-24 00:34:14.381222] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:20.854 00:34:14 -- bdev/bdev_raid.sh@506 -- # '[' 42ea901a-b666-4175-860f-bb43ffbc9951 '!=' 42ea901a-b666-4175-860f-bb43ffbc9951 ']' 00:20:20.854 00:34:14 -- bdev/bdev_raid.sh@511 -- # killprocess 126366 00:20:20.854 00:34:14 -- common/autotest_common.sh@936 -- # '[' -z 126366 ']' 00:20:20.854 00:34:14 -- common/autotest_common.sh@940 -- # kill -0 126366 00:20:20.854 00:34:14 -- common/autotest_common.sh@941 -- # uname 00:20:20.854 00:34:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:20.854 00:34:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126366 00:20:20.854 killing process with pid 126366 00:20:20.854 00:34:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:20.854 00:34:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:20.854 00:34:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126366' 00:20:20.854 00:34:14 -- common/autotest_common.sh@955 -- # kill 126366 00:20:20.854 [2024-04-24 00:34:14.424387] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:20.854 00:34:14 -- common/autotest_common.sh@960 -- # wait 126366 00:20:20.854 [2024-04-24 00:34:14.424470] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:20.854 [2024-04-24 00:34:14.424527] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:20.854 [2024-04-24 00:34:14.424536] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state offline 00:20:21.111 [2024-04-24 00:34:14.749003] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:22.482 00:20:22.482 real 0m21.625s 00:20:22.482 user 0m38.537s 00:20:22.482 sys 0m3.153s 00:20:22.482 00:34:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:22.482 00:34:16 -- common/autotest_common.sh@10 -- # set +x 00:20:22.482 ************************************ 00:20:22.482 END TEST raid_superblock_test 00:20:22.482 ************************************ 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:20:22.482 00:34:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:20:22.482 00:34:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:22.482 00:34:16 -- common/autotest_common.sh@10 -- # set +x 00:20:22.482 ************************************ 00:20:22.482 START TEST raid_state_function_test 00:20:22.482 ************************************ 00:20:22.482 00:34:16 -- 
common/autotest_common.sh@1111 -- # raid_state_function_test raid0 4 false 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=127007 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:22.482 Process raid pid: 127007 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127007' 00:20:22.482 00:34:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127007 /var/tmp/spdk-raid.sock 00:20:22.482 00:34:16 -- common/autotest_common.sh@817 -- # '[' -z 127007 ']' 00:20:22.482 00:34:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:22.482 00:34:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:22.483 00:34:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:22.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:22.483 00:34:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:22.483 00:34:16 -- common/autotest_common.sh@10 -- # set +x 00:20:22.741 [2024-04-24 00:34:16.336971] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:20:22.741 [2024-04-24 00:34:16.337375] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.741 [2024-04-24 00:34:16.513896] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.998 [2024-04-24 00:34:16.742782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.256 [2024-04-24 00:34:16.980725] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:23.822 00:34:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:23.823 00:34:17 -- common/autotest_common.sh@850 -- # return 0 00:20:23.823 00:34:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:23.823 [2024-04-24 00:34:17.603614] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:23.823 [2024-04-24 00:34:17.603902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:23.823 [2024-04-24 00:34:17.603993] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:23.823 [2024-04-24 00:34:17.604049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:23.823 [2024-04-24 00:34:17.604196] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:23.823 [2024-04-24 00:34:17.604265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:23.823 [2024-04-24 00:34:17.604377] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:23.823 [2024-04-24 00:34:17.604429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:24.081 00:34:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:24.081 00:34:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:24.081 00:34:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:24.081 00:34:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:24.081 00:34:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:24.081 00:34:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:24.081 00:34:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.081 00:34:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.081 00:34:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.081 00:34:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.081 00:34:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.081 00:34:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.339 00:34:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:24.339 "name": "Existed_Raid", 00:20:24.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.339 "strip_size_kb": 64, 00:20:24.339 "state": "configuring", 00:20:24.339 "raid_level": "raid0", 00:20:24.339 "superblock": false, 00:20:24.339 "num_base_bdevs": 4, 00:20:24.339 "num_base_bdevs_discovered": 0, 00:20:24.339 "num_base_bdevs_operational": 4, 00:20:24.339 "base_bdevs_list": [ 00:20:24.339 { 00:20:24.339 
"name": "BaseBdev1", 00:20:24.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.339 "is_configured": false, 00:20:24.339 "data_offset": 0, 00:20:24.339 "data_size": 0 00:20:24.339 }, 00:20:24.339 { 00:20:24.339 "name": "BaseBdev2", 00:20:24.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.339 "is_configured": false, 00:20:24.339 "data_offset": 0, 00:20:24.339 "data_size": 0 00:20:24.339 }, 00:20:24.339 { 00:20:24.339 "name": "BaseBdev3", 00:20:24.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.339 "is_configured": false, 00:20:24.339 "data_offset": 0, 00:20:24.339 "data_size": 0 00:20:24.339 }, 00:20:24.339 { 00:20:24.339 "name": "BaseBdev4", 00:20:24.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.339 "is_configured": false, 00:20:24.339 "data_offset": 0, 00:20:24.339 "data_size": 0 00:20:24.339 } 00:20:24.339 ] 00:20:24.339 }' 00:20:24.339 00:34:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:24.340 00:34:17 -- common/autotest_common.sh@10 -- # set +x 00:20:24.906 00:34:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:25.164 [2024-04-24 00:34:18.855749] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:25.164 [2024-04-24 00:34:18.856001] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:20:25.164 00:34:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:25.422 [2024-04-24 00:34:19.103819] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:25.422 [2024-04-24 00:34:19.104047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:25.422 [2024-04-24 00:34:19.104165] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:25.422 [2024-04-24 00:34:19.104269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:25.422 [2024-04-24 00:34:19.104345] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:25.422 [2024-04-24 00:34:19.104454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:25.422 [2024-04-24 00:34:19.104525] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:25.422 [2024-04-24 00:34:19.104582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:25.422 00:34:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:25.679 [2024-04-24 00:34:19.437316] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.679 BaseBdev1 00:20:25.679 00:34:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:25.679 00:34:19 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:20:25.679 00:34:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:25.679 00:34:19 -- common/autotest_common.sh@887 -- # local i 00:20:25.679 00:34:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:25.679 00:34:19 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:25.679 00:34:19 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:26.246 00:34:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:26.246 [ 00:20:26.246 { 00:20:26.246 "name": "BaseBdev1", 00:20:26.246 "aliases": [ 00:20:26.246 "c1be07a1-a688-43d7-914c-fdefe93be14e" 00:20:26.246 ], 00:20:26.246 "product_name": "Malloc disk", 00:20:26.246 "block_size": 512, 00:20:26.246 "num_blocks": 65536, 00:20:26.246 "uuid": "c1be07a1-a688-43d7-914c-fdefe93be14e", 00:20:26.246 "assigned_rate_limits": { 00:20:26.246 "rw_ios_per_sec": 0, 00:20:26.246 "rw_mbytes_per_sec": 0, 00:20:26.246 "r_mbytes_per_sec": 0, 00:20:26.246 "w_mbytes_per_sec": 0 00:20:26.246 }, 00:20:26.246 "claimed": true, 00:20:26.246 "claim_type": "exclusive_write", 00:20:26.246 "zoned": false, 00:20:26.246 "supported_io_types": { 00:20:26.246 "read": true, 00:20:26.246 "write": true, 00:20:26.246 "unmap": true, 00:20:26.246 "write_zeroes": true, 00:20:26.246 "flush": true, 00:20:26.246 "reset": true, 00:20:26.246 "compare": false, 00:20:26.246 "compare_and_write": false, 00:20:26.246 "abort": true, 00:20:26.246 "nvme_admin": false, 00:20:26.246 "nvme_io": false 00:20:26.246 }, 00:20:26.246 "memory_domains": [ 00:20:26.246 { 00:20:26.246 "dma_device_id": "system", 00:20:26.246 "dma_device_type": 1 00:20:26.246 }, 00:20:26.246 { 00:20:26.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.246 "dma_device_type": 2 00:20:26.246 } 00:20:26.246 ], 00:20:26.246 "driver_specific": {} 00:20:26.246 } 00:20:26.246 ] 00:20:26.246 00:34:20 -- common/autotest_common.sh@893 -- # return 0 00:20:26.246 00:34:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:26.246 00:34:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:26.246 00:34:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:26.246 00:34:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:26.246 00:34:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:26.246 00:34:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:26.246 00:34:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:26.246 00:34:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:26.246 00:34:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:26.246 00:34:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:26.246 00:34:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.246 00:34:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.813 00:34:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.813 "name": "Existed_Raid", 00:20:26.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.813 "strip_size_kb": 64, 00:20:26.813 "state": "configuring", 00:20:26.813 "raid_level": "raid0", 00:20:26.813 "superblock": false, 00:20:26.813 "num_base_bdevs": 4, 00:20:26.813 "num_base_bdevs_discovered": 1, 00:20:26.813 "num_base_bdevs_operational": 4, 00:20:26.813 "base_bdevs_list": [ 00:20:26.813 { 00:20:26.813 "name": "BaseBdev1", 00:20:26.813 "uuid": "c1be07a1-a688-43d7-914c-fdefe93be14e", 00:20:26.813 "is_configured": true, 00:20:26.813 "data_offset": 0, 00:20:26.813 "data_size": 65536 00:20:26.813 }, 00:20:26.813 { 00:20:26.813 "name": "BaseBdev2", 00:20:26.813 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:26.813 "is_configured": false, 00:20:26.813 "data_offset": 0, 00:20:26.813 "data_size": 0 00:20:26.813 }, 00:20:26.813 { 00:20:26.813 "name": "BaseBdev3", 00:20:26.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.813 "is_configured": false, 00:20:26.813 "data_offset": 0, 00:20:26.813 "data_size": 0 00:20:26.813 }, 00:20:26.813 { 00:20:26.813 "name": "BaseBdev4", 00:20:26.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.813 "is_configured": false, 00:20:26.813 "data_offset": 0, 00:20:26.813 "data_size": 0 00:20:26.813 } 00:20:26.813 ] 00:20:26.813 }' 00:20:26.813 00:34:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.813 00:34:20 -- common/autotest_common.sh@10 -- # set +x 00:20:27.380 00:34:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:27.740 [2024-04-24 00:34:21.317769] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:27.740 [2024-04-24 00:34:21.318004] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:20:27.740 00:34:21 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:20:27.740 00:34:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:28.002 [2024-04-24 00:34:21.529882] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.002 [2024-04-24 00:34:21.532263] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:28.002 [2024-04-24 00:34:21.532462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:28.002 [2024-04-24 00:34:21.532564] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:28.002 [2024-04-24 00:34:21.532627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:28.002 [2024-04-24 00:34:21.532712] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:28.002 [2024-04-24 00:34:21.532762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.002 
00:34:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:28.002 "name": "Existed_Raid", 00:20:28.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.002 "strip_size_kb": 64, 00:20:28.002 "state": "configuring", 00:20:28.002 "raid_level": "raid0", 00:20:28.002 "superblock": false, 00:20:28.002 "num_base_bdevs": 4, 00:20:28.002 "num_base_bdevs_discovered": 1, 00:20:28.002 "num_base_bdevs_operational": 4, 00:20:28.002 "base_bdevs_list": [ 00:20:28.002 { 00:20:28.002 "name": "BaseBdev1", 00:20:28.002 "uuid": "c1be07a1-a688-43d7-914c-fdefe93be14e", 00:20:28.002 "is_configured": true, 00:20:28.002 "data_offset": 0, 00:20:28.002 "data_size": 65536 00:20:28.002 }, 00:20:28.002 { 00:20:28.002 "name": "BaseBdev2", 00:20:28.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.002 "is_configured": false, 00:20:28.002 "data_offset": 0, 00:20:28.002 "data_size": 0 00:20:28.002 }, 00:20:28.002 { 00:20:28.002 "name": "BaseBdev3", 00:20:28.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.002 "is_configured": false, 00:20:28.002 "data_offset": 0, 00:20:28.002 "data_size": 0 00:20:28.002 }, 00:20:28.002 { 00:20:28.002 "name": "BaseBdev4", 00:20:28.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.002 "is_configured": false, 00:20:28.002 "data_offset": 0, 00:20:28.002 "data_size": 0 00:20:28.002 } 00:20:28.002 ] 00:20:28.002 }' 00:20:28.002 00:34:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:28.002 00:34:21 -- common/autotest_common.sh@10 -- # set +x 00:20:28.568 00:34:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:29.135 [2024-04-24 00:34:22.660578] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:29.135 BaseBdev2 00:20:29.135 00:34:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:29.135 00:34:22 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:20:29.135 00:34:22 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:29.135 00:34:22 -- common/autotest_common.sh@887 -- # local i 00:20:29.135 00:34:22 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:29.135 00:34:22 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:29.135 00:34:22 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:29.393 00:34:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:29.652 [ 00:20:29.652 { 00:20:29.652 "name": "BaseBdev2", 00:20:29.652 "aliases": [ 00:20:29.652 "5de88770-90ed-4a1c-9d6a-7747f475cc8a" 00:20:29.652 ], 00:20:29.652 "product_name": "Malloc disk", 00:20:29.652 "block_size": 512, 00:20:29.652 "num_blocks": 65536, 00:20:29.652 "uuid": "5de88770-90ed-4a1c-9d6a-7747f475cc8a", 00:20:29.652 "assigned_rate_limits": { 00:20:29.652 "rw_ios_per_sec": 0, 00:20:29.652 "rw_mbytes_per_sec": 0, 00:20:29.652 "r_mbytes_per_sec": 0, 00:20:29.652 "w_mbytes_per_sec": 0 00:20:29.652 }, 00:20:29.652 "claimed": true, 00:20:29.652 "claim_type": "exclusive_write", 00:20:29.652 "zoned": false, 00:20:29.652 "supported_io_types": { 00:20:29.652 "read": true, 00:20:29.652 "write": true, 00:20:29.652 "unmap": true, 00:20:29.652 "write_zeroes": true, 00:20:29.652 "flush": true, 00:20:29.652 "reset": true, 00:20:29.652 "compare": false, 00:20:29.652 "compare_and_write": false, 00:20:29.652 "abort": true, 00:20:29.652 
"nvme_admin": false, 00:20:29.652 "nvme_io": false 00:20:29.652 }, 00:20:29.652 "memory_domains": [ 00:20:29.652 { 00:20:29.652 "dma_device_id": "system", 00:20:29.652 "dma_device_type": 1 00:20:29.652 }, 00:20:29.652 { 00:20:29.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.652 "dma_device_type": 2 00:20:29.652 } 00:20:29.652 ], 00:20:29.652 "driver_specific": {} 00:20:29.652 } 00:20:29.652 ] 00:20:29.652 00:34:23 -- common/autotest_common.sh@893 -- # return 0 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.652 00:34:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.929 00:34:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:29.929 "name": "Existed_Raid", 00:20:29.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.929 "strip_size_kb": 64, 00:20:29.929 "state": "configuring", 00:20:29.929 "raid_level": "raid0", 00:20:29.929 "superblock": false, 00:20:29.929 "num_base_bdevs": 4, 00:20:29.929 "num_base_bdevs_discovered": 2, 00:20:29.929 "num_base_bdevs_operational": 4, 00:20:29.929 "base_bdevs_list": [ 00:20:29.929 { 00:20:29.929 "name": "BaseBdev1", 00:20:29.929 "uuid": "c1be07a1-a688-43d7-914c-fdefe93be14e", 00:20:29.929 "is_configured": true, 00:20:29.929 "data_offset": 0, 00:20:29.929 "data_size": 65536 00:20:29.929 }, 00:20:29.929 { 00:20:29.929 "name": "BaseBdev2", 00:20:29.929 "uuid": "5de88770-90ed-4a1c-9d6a-7747f475cc8a", 00:20:29.929 "is_configured": true, 00:20:29.929 "data_offset": 0, 00:20:29.929 "data_size": 65536 00:20:29.929 }, 00:20:29.929 { 00:20:29.929 "name": "BaseBdev3", 00:20:29.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.929 "is_configured": false, 00:20:29.929 "data_offset": 0, 00:20:29.929 "data_size": 0 00:20:29.929 }, 00:20:29.929 { 00:20:29.929 "name": "BaseBdev4", 00:20:29.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.929 "is_configured": false, 00:20:29.929 "data_offset": 0, 00:20:29.929 "data_size": 0 00:20:29.929 } 00:20:29.929 ] 00:20:29.929 }' 00:20:29.929 00:34:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:29.929 00:34:23 -- common/autotest_common.sh@10 -- # set +x 00:20:30.494 00:34:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:30.752 [2024-04-24 00:34:24.486733] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:30.752 BaseBdev3 00:20:30.752 00:34:24 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev3 00:20:30.752 00:34:24 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:20:30.752 00:34:24 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:30.752 00:34:24 -- common/autotest_common.sh@887 -- # local i 00:20:30.752 00:34:24 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:30.752 00:34:24 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:30.752 00:34:24 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:31.010 00:34:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:31.269 [ 00:20:31.269 { 00:20:31.269 "name": "BaseBdev3", 00:20:31.269 "aliases": [ 00:20:31.269 "22c0c99c-43d2-4cef-a2c5-85cb1cf4376c" 00:20:31.269 ], 00:20:31.269 "product_name": "Malloc disk", 00:20:31.269 "block_size": 512, 00:20:31.269 "num_blocks": 65536, 00:20:31.269 "uuid": "22c0c99c-43d2-4cef-a2c5-85cb1cf4376c", 00:20:31.269 "assigned_rate_limits": { 00:20:31.269 "rw_ios_per_sec": 0, 00:20:31.269 "rw_mbytes_per_sec": 0, 00:20:31.269 "r_mbytes_per_sec": 0, 00:20:31.269 "w_mbytes_per_sec": 0 00:20:31.269 }, 00:20:31.269 "claimed": true, 00:20:31.269 "claim_type": "exclusive_write", 00:20:31.269 "zoned": false, 00:20:31.269 "supported_io_types": { 00:20:31.269 "read": true, 00:20:31.269 "write": true, 00:20:31.269 "unmap": true, 00:20:31.269 "write_zeroes": true, 00:20:31.269 "flush": true, 00:20:31.269 "reset": true, 00:20:31.269 "compare": false, 00:20:31.269 "compare_and_write": false, 00:20:31.269 "abort": true, 00:20:31.269 "nvme_admin": false, 00:20:31.269 "nvme_io": false 00:20:31.269 }, 00:20:31.269 "memory_domains": [ 00:20:31.269 { 00:20:31.269 "dma_device_id": "system", 00:20:31.269 "dma_device_type": 1 00:20:31.269 }, 00:20:31.269 { 00:20:31.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.269 "dma_device_type": 2 00:20:31.269 } 00:20:31.269 ], 00:20:31.269 "driver_specific": {} 00:20:31.269 } 00:20:31.269 ] 00:20:31.269 00:34:24 -- common/autotest_common.sh@893 -- # return 0 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.269 00:34:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.527 00:34:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:31.527 "name": "Existed_Raid", 00:20:31.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.527 "strip_size_kb": 64, 
00:20:31.527 "state": "configuring", 00:20:31.527 "raid_level": "raid0", 00:20:31.527 "superblock": false, 00:20:31.527 "num_base_bdevs": 4, 00:20:31.527 "num_base_bdevs_discovered": 3, 00:20:31.527 "num_base_bdevs_operational": 4, 00:20:31.527 "base_bdevs_list": [ 00:20:31.527 { 00:20:31.527 "name": "BaseBdev1", 00:20:31.527 "uuid": "c1be07a1-a688-43d7-914c-fdefe93be14e", 00:20:31.527 "is_configured": true, 00:20:31.527 "data_offset": 0, 00:20:31.527 "data_size": 65536 00:20:31.527 }, 00:20:31.527 { 00:20:31.527 "name": "BaseBdev2", 00:20:31.527 "uuid": "5de88770-90ed-4a1c-9d6a-7747f475cc8a", 00:20:31.527 "is_configured": true, 00:20:31.527 "data_offset": 0, 00:20:31.527 "data_size": 65536 00:20:31.527 }, 00:20:31.527 { 00:20:31.527 "name": "BaseBdev3", 00:20:31.527 "uuid": "22c0c99c-43d2-4cef-a2c5-85cb1cf4376c", 00:20:31.527 "is_configured": true, 00:20:31.527 "data_offset": 0, 00:20:31.527 "data_size": 65536 00:20:31.527 }, 00:20:31.527 { 00:20:31.527 "name": "BaseBdev4", 00:20:31.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.527 "is_configured": false, 00:20:31.527 "data_offset": 0, 00:20:31.527 "data_size": 0 00:20:31.527 } 00:20:31.527 ] 00:20:31.527 }' 00:20:31.527 00:34:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:31.527 00:34:25 -- common/autotest_common.sh@10 -- # set +x 00:20:32.123 00:34:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:32.690 [2024-04-24 00:34:26.184756] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:32.690 [2024-04-24 00:34:26.185002] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:20:32.690 [2024-04-24 00:34:26.185048] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:20:32.690 [2024-04-24 00:34:26.185357] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:32.690 [2024-04-24 00:34:26.185837] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:20:32.690 [2024-04-24 00:34:26.185954] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:20:32.690 [2024-04-24 00:34:26.186318] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.690 BaseBdev4 00:20:32.690 00:34:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:32.690 00:34:26 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:20:32.690 00:34:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:32.690 00:34:26 -- common/autotest_common.sh@887 -- # local i 00:20:32.690 00:34:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:32.690 00:34:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:32.690 00:34:26 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:32.690 00:34:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:32.948 [ 00:20:32.948 { 00:20:32.948 "name": "BaseBdev4", 00:20:32.948 "aliases": [ 00:20:32.948 "97a1fc5f-8bd5-45a3-871b-0b21679fcfe6" 00:20:32.948 ], 00:20:32.948 "product_name": "Malloc disk", 00:20:32.948 "block_size": 512, 00:20:32.948 "num_blocks": 65536, 00:20:32.948 "uuid": "97a1fc5f-8bd5-45a3-871b-0b21679fcfe6", 00:20:32.948 
"assigned_rate_limits": { 00:20:32.948 "rw_ios_per_sec": 0, 00:20:32.948 "rw_mbytes_per_sec": 0, 00:20:32.948 "r_mbytes_per_sec": 0, 00:20:32.948 "w_mbytes_per_sec": 0 00:20:32.948 }, 00:20:32.948 "claimed": true, 00:20:32.948 "claim_type": "exclusive_write", 00:20:32.948 "zoned": false, 00:20:32.948 "supported_io_types": { 00:20:32.948 "read": true, 00:20:32.948 "write": true, 00:20:32.948 "unmap": true, 00:20:32.948 "write_zeroes": true, 00:20:32.948 "flush": true, 00:20:32.948 "reset": true, 00:20:32.948 "compare": false, 00:20:32.948 "compare_and_write": false, 00:20:32.948 "abort": true, 00:20:32.948 "nvme_admin": false, 00:20:32.948 "nvme_io": false 00:20:32.948 }, 00:20:32.948 "memory_domains": [ 00:20:32.948 { 00:20:32.948 "dma_device_id": "system", 00:20:32.948 "dma_device_type": 1 00:20:32.948 }, 00:20:32.948 { 00:20:32.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.948 "dma_device_type": 2 00:20:32.948 } 00:20:32.948 ], 00:20:32.948 "driver_specific": {} 00:20:32.948 } 00:20:32.948 ] 00:20:32.948 00:34:26 -- common/autotest_common.sh@893 -- # return 0 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.948 00:34:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.515 00:34:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:33.515 "name": "Existed_Raid", 00:20:33.515 "uuid": "1087fb87-7b6d-41fe-87f9-2d71a8ba75d0", 00:20:33.515 "strip_size_kb": 64, 00:20:33.515 "state": "online", 00:20:33.515 "raid_level": "raid0", 00:20:33.515 "superblock": false, 00:20:33.515 "num_base_bdevs": 4, 00:20:33.515 "num_base_bdevs_discovered": 4, 00:20:33.515 "num_base_bdevs_operational": 4, 00:20:33.515 "base_bdevs_list": [ 00:20:33.515 { 00:20:33.515 "name": "BaseBdev1", 00:20:33.515 "uuid": "c1be07a1-a688-43d7-914c-fdefe93be14e", 00:20:33.515 "is_configured": true, 00:20:33.515 "data_offset": 0, 00:20:33.515 "data_size": 65536 00:20:33.515 }, 00:20:33.515 { 00:20:33.515 "name": "BaseBdev2", 00:20:33.515 "uuid": "5de88770-90ed-4a1c-9d6a-7747f475cc8a", 00:20:33.515 "is_configured": true, 00:20:33.515 "data_offset": 0, 00:20:33.515 "data_size": 65536 00:20:33.515 }, 00:20:33.515 { 00:20:33.515 "name": "BaseBdev3", 00:20:33.515 "uuid": "22c0c99c-43d2-4cef-a2c5-85cb1cf4376c", 00:20:33.515 "is_configured": true, 00:20:33.515 "data_offset": 0, 00:20:33.515 "data_size": 65536 00:20:33.515 }, 00:20:33.515 { 00:20:33.515 "name": "BaseBdev4", 00:20:33.515 "uuid": "97a1fc5f-8bd5-45a3-871b-0b21679fcfe6", 00:20:33.515 "is_configured": true, 
00:20:33.515 "data_offset": 0, 00:20:33.515 "data_size": 65536 00:20:33.515 } 00:20:33.515 ] 00:20:33.515 }' 00:20:33.515 00:34:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:33.515 00:34:27 -- common/autotest_common.sh@10 -- # set +x 00:20:34.081 00:34:27 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:34.339 [2024-04-24 00:34:27.945317] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:34.339 [2024-04-24 00:34:27.945527] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.339 [2024-04-24 00:34:27.945687] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@197 -- # return 1 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:34.339 00:34:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:34.340 00:34:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:34.340 00:34:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:34.340 00:34:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.340 00:34:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.597 00:34:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:34.597 "name": "Existed_Raid", 00:20:34.597 "uuid": "1087fb87-7b6d-41fe-87f9-2d71a8ba75d0", 00:20:34.597 "strip_size_kb": 64, 00:20:34.597 "state": "offline", 00:20:34.597 "raid_level": "raid0", 00:20:34.597 "superblock": false, 00:20:34.597 "num_base_bdevs": 4, 00:20:34.597 "num_base_bdevs_discovered": 3, 00:20:34.597 "num_base_bdevs_operational": 3, 00:20:34.597 "base_bdevs_list": [ 00:20:34.597 { 00:20:34.597 "name": null, 00:20:34.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.597 "is_configured": false, 00:20:34.597 "data_offset": 0, 00:20:34.598 "data_size": 65536 00:20:34.598 }, 00:20:34.598 { 00:20:34.598 "name": "BaseBdev2", 00:20:34.598 "uuid": "5de88770-90ed-4a1c-9d6a-7747f475cc8a", 00:20:34.598 "is_configured": true, 00:20:34.598 "data_offset": 0, 00:20:34.598 "data_size": 65536 00:20:34.598 }, 00:20:34.598 { 00:20:34.598 "name": "BaseBdev3", 00:20:34.598 "uuid": "22c0c99c-43d2-4cef-a2c5-85cb1cf4376c", 00:20:34.598 "is_configured": true, 00:20:34.598 "data_offset": 0, 00:20:34.598 "data_size": 65536 00:20:34.598 }, 00:20:34.598 { 00:20:34.598 "name": "BaseBdev4", 00:20:34.598 "uuid": "97a1fc5f-8bd5-45a3-871b-0b21679fcfe6", 00:20:34.598 "is_configured": true, 00:20:34.598 "data_offset": 0, 00:20:34.598 "data_size": 65536 00:20:34.598 } 00:20:34.598 ] 00:20:34.598 }' 00:20:34.598 00:34:28 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:34.598 00:34:28 -- common/autotest_common.sh@10 -- # set +x 00:20:35.532 00:34:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:35.532 00:34:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:35.532 00:34:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.532 00:34:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:35.532 00:34:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:35.532 00:34:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:35.532 00:34:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:35.789 [2024-04-24 00:34:29.531896] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:36.047 00:34:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:36.047 00:34:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:36.047 00:34:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:36.047 00:34:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.304 00:34:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:36.304 00:34:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:36.304 00:34:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:36.566 [2024-04-24 00:34:30.192087] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:36.566 00:34:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:36.566 00:34:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:36.566 00:34:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.566 00:34:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:37.131 00:34:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:37.131 00:34:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:37.131 00:34:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:37.131 [2024-04-24 00:34:30.808799] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:37.131 [2024-04-24 00:34:30.808860] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:20:37.389 00:34:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:37.389 00:34:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:37.389 00:34:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.389 00:34:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:37.648 00:34:31 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:37.648 00:34:31 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:37.648 00:34:31 -- bdev/bdev_raid.sh@287 -- # killprocess 127007 00:20:37.648 00:34:31 -- common/autotest_common.sh@936 -- # '[' -z 127007 ']' 00:20:37.648 00:34:31 -- common/autotest_common.sh@940 -- # kill -0 127007 00:20:37.648 00:34:31 -- common/autotest_common.sh@941 -- # uname 00:20:37.648 00:34:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:37.648 00:34:31 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 127007 00:20:37.648 killing process with pid 127007 00:20:37.648 00:34:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:37.648 00:34:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:37.648 00:34:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127007' 00:20:37.648 00:34:31 -- common/autotest_common.sh@955 -- # kill 127007 00:20:37.648 00:34:31 -- common/autotest_common.sh@960 -- # wait 127007 00:20:37.648 [2024-04-24 00:34:31.274299] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:37.648 [2024-04-24 00:34:31.274437] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:39.020 ************************************ 00:20:39.020 END TEST raid_state_function_test 00:20:39.020 ************************************ 00:20:39.020 00:34:32 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:39.020 00:20:39.020 real 0m16.464s 00:20:39.020 user 0m28.561s 00:20:39.020 sys 0m2.332s 00:20:39.020 00:34:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:39.020 00:34:32 -- common/autotest_common.sh@10 -- # set +x 00:20:39.020 00:34:32 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:20:39.020 00:34:32 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:20:39.020 00:34:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:39.020 00:34:32 -- common/autotest_common.sh@10 -- # set +x 00:20:39.020 ************************************ 00:20:39.020 START TEST raid_state_function_test_sb 00:20:39.021 ************************************ 00:20:39.021 00:34:32 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 4 true 00:20:39.021 00:34:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:20:39.021 00:34:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:20:39.021 00:34:32 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:20:39.021 00:34:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@212 -- # '[' 
raid0 '!=' raid1 ']' 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:20:39.278 00:34:32 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:20:39.279 00:34:32 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:39.279 00:34:32 -- bdev/bdev_raid.sh@226 -- # raid_pid=127475 00:20:39.279 00:34:32 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127475' 00:20:39.279 Process raid pid: 127475 00:20:39.279 00:34:32 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127475 /var/tmp/spdk-raid.sock 00:20:39.279 00:34:32 -- common/autotest_common.sh@817 -- # '[' -z 127475 ']' 00:20:39.279 00:34:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:39.279 00:34:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:39.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:39.279 00:34:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:39.279 00:34:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:39.279 00:34:32 -- common/autotest_common.sh@10 -- # set +x 00:20:39.279 [2024-04-24 00:34:32.885621] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:20:39.279 [2024-04-24 00:34:32.885775] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.279 [2024-04-24 00:34:33.056942] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.845 [2024-04-24 00:34:33.331175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.845 [2024-04-24 00:34:33.559544] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:40.103 00:34:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:40.103 00:34:33 -- common/autotest_common.sh@850 -- # return 0 00:20:40.103 00:34:33 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:40.361 [2024-04-24 00:34:33.969456] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:40.361 [2024-04-24 00:34:33.969540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:40.361 [2024-04-24 00:34:33.969551] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:40.361 [2024-04-24 00:34:33.969573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:40.361 [2024-04-24 00:34:33.969581] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:40.361 [2024-04-24 00:34:33.969617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:40.361 [2024-04-24 00:34:33.969624] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:40.361 [2024-04-24 00:34:33.969647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 
00:20:40.361 00:34:33 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:40.361 00:34:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:40.361 00:34:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:40.361 00:34:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:40.361 00:34:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:40.361 00:34:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:40.361 00:34:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:40.361 00:34:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.361 00:34:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.361 00:34:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.361 00:34:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.361 00:34:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.626 00:34:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.626 "name": "Existed_Raid", 00:20:40.626 "uuid": "3ee3716e-9b6a-4f10-a94e-203fcae70d3f", 00:20:40.626 "strip_size_kb": 64, 00:20:40.626 "state": "configuring", 00:20:40.626 "raid_level": "raid0", 00:20:40.626 "superblock": true, 00:20:40.626 "num_base_bdevs": 4, 00:20:40.626 "num_base_bdevs_discovered": 0, 00:20:40.626 "num_base_bdevs_operational": 4, 00:20:40.626 "base_bdevs_list": [ 00:20:40.626 { 00:20:40.626 "name": "BaseBdev1", 00:20:40.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.626 "is_configured": false, 00:20:40.626 "data_offset": 0, 00:20:40.626 "data_size": 0 00:20:40.626 }, 00:20:40.626 { 00:20:40.626 "name": "BaseBdev2", 00:20:40.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.626 "is_configured": false, 00:20:40.626 "data_offset": 0, 00:20:40.626 "data_size": 0 00:20:40.626 }, 00:20:40.626 { 00:20:40.626 "name": "BaseBdev3", 00:20:40.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.626 "is_configured": false, 00:20:40.626 "data_offset": 0, 00:20:40.626 "data_size": 0 00:20:40.626 }, 00:20:40.626 { 00:20:40.626 "name": "BaseBdev4", 00:20:40.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.626 "is_configured": false, 00:20:40.626 "data_offset": 0, 00:20:40.626 "data_size": 0 00:20:40.626 } 00:20:40.626 ] 00:20:40.626 }' 00:20:40.626 00:34:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.626 00:34:34 -- common/autotest_common.sh@10 -- # set +x 00:20:41.191 00:34:34 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:41.449 [2024-04-24 00:34:34.985584] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:41.449 [2024-04-24 00:34:34.985633] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:20:41.449 00:34:35 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:41.450 [2024-04-24 00:34:35.217632] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:41.450 [2024-04-24 00:34:35.217724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:41.450 [2024-04-24 00:34:35.217735] bdev.c:8067:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:41.450 [2024-04-24 00:34:35.217761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:41.450 [2024-04-24 00:34:35.217770] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:41.450 [2024-04-24 00:34:35.217815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:41.450 [2024-04-24 00:34:35.217822] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:41.450 [2024-04-24 00:34:35.217848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:41.450 00:34:35 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:41.707 [2024-04-24 00:34:35.470822] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:41.707 BaseBdev1 00:20:41.708 00:34:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:41.708 00:34:35 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:20:41.708 00:34:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:41.708 00:34:35 -- common/autotest_common.sh@887 -- # local i 00:20:41.708 00:34:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:41.708 00:34:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:41.708 00:34:35 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:42.273 00:34:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:42.273 [ 00:20:42.273 { 00:20:42.273 "name": "BaseBdev1", 00:20:42.273 "aliases": [ 00:20:42.273 "e879e74a-cad0-4b39-9649-0d36d0fa5518" 00:20:42.273 ], 00:20:42.273 "product_name": "Malloc disk", 00:20:42.274 "block_size": 512, 00:20:42.274 "num_blocks": 65536, 00:20:42.274 "uuid": "e879e74a-cad0-4b39-9649-0d36d0fa5518", 00:20:42.274 "assigned_rate_limits": { 00:20:42.274 "rw_ios_per_sec": 0, 00:20:42.274 "rw_mbytes_per_sec": 0, 00:20:42.274 "r_mbytes_per_sec": 0, 00:20:42.274 "w_mbytes_per_sec": 0 00:20:42.274 }, 00:20:42.274 "claimed": true, 00:20:42.274 "claim_type": "exclusive_write", 00:20:42.274 "zoned": false, 00:20:42.274 "supported_io_types": { 00:20:42.274 "read": true, 00:20:42.274 "write": true, 00:20:42.274 "unmap": true, 00:20:42.274 "write_zeroes": true, 00:20:42.274 "flush": true, 00:20:42.274 "reset": true, 00:20:42.274 "compare": false, 00:20:42.274 "compare_and_write": false, 00:20:42.274 "abort": true, 00:20:42.274 "nvme_admin": false, 00:20:42.274 "nvme_io": false 00:20:42.274 }, 00:20:42.274 "memory_domains": [ 00:20:42.274 { 00:20:42.274 "dma_device_id": "system", 00:20:42.274 "dma_device_type": 1 00:20:42.274 }, 00:20:42.274 { 00:20:42.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.274 "dma_device_type": 2 00:20:42.274 } 00:20:42.274 ], 00:20:42.274 "driver_specific": {} 00:20:42.274 } 00:20:42.274 ] 00:20:42.274 00:34:36 -- common/autotest_common.sh@893 -- # return 0 00:20:42.274 00:34:36 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:42.274 00:34:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:42.274 00:34:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:42.274 00:34:36 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:42.274 00:34:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:42.274 00:34:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:42.274 00:34:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:42.274 00:34:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:42.274 00:34:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:42.274 00:34:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:42.274 00:34:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.274 00:34:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.840 00:34:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:42.840 "name": "Existed_Raid", 00:20:42.840 "uuid": "1b76129a-d50b-4c26-b70b-a3b3a64b0c84", 00:20:42.840 "strip_size_kb": 64, 00:20:42.840 "state": "configuring", 00:20:42.840 "raid_level": "raid0", 00:20:42.840 "superblock": true, 00:20:42.840 "num_base_bdevs": 4, 00:20:42.840 "num_base_bdevs_discovered": 1, 00:20:42.840 "num_base_bdevs_operational": 4, 00:20:42.840 "base_bdevs_list": [ 00:20:42.840 { 00:20:42.840 "name": "BaseBdev1", 00:20:42.840 "uuid": "e879e74a-cad0-4b39-9649-0d36d0fa5518", 00:20:42.840 "is_configured": true, 00:20:42.840 "data_offset": 2048, 00:20:42.840 "data_size": 63488 00:20:42.840 }, 00:20:42.840 { 00:20:42.840 "name": "BaseBdev2", 00:20:42.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.840 "is_configured": false, 00:20:42.840 "data_offset": 0, 00:20:42.840 "data_size": 0 00:20:42.840 }, 00:20:42.840 { 00:20:42.840 "name": "BaseBdev3", 00:20:42.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.840 "is_configured": false, 00:20:42.840 "data_offset": 0, 00:20:42.840 "data_size": 0 00:20:42.840 }, 00:20:42.840 { 00:20:42.840 "name": "BaseBdev4", 00:20:42.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.840 "is_configured": false, 00:20:42.840 "data_offset": 0, 00:20:42.840 "data_size": 0 00:20:42.840 } 00:20:42.840 ] 00:20:42.840 }' 00:20:42.840 00:34:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:42.840 00:34:36 -- common/autotest_common.sh@10 -- # set +x 00:20:43.406 00:34:36 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:43.664 [2024-04-24 00:34:37.231332] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:43.664 [2024-04-24 00:34:37.231583] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:20:43.664 00:34:37 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:20:43.664 00:34:37 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:43.922 00:34:37 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:44.180 BaseBdev1 00:20:44.180 00:34:37 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:20:44.180 00:34:37 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:20:44.180 00:34:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:44.180 00:34:37 -- common/autotest_common.sh@887 -- # local i 00:20:44.180 00:34:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:44.180 00:34:37 -- common/autotest_common.sh@888 -- # 
bdev_timeout=2000 00:20:44.180 00:34:37 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.437 00:34:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:44.695 [ 00:20:44.695 { 00:20:44.695 "name": "BaseBdev1", 00:20:44.695 "aliases": [ 00:20:44.695 "677a259e-a9fd-4ee6-bc14-8af79463adea" 00:20:44.695 ], 00:20:44.695 "product_name": "Malloc disk", 00:20:44.695 "block_size": 512, 00:20:44.695 "num_blocks": 65536, 00:20:44.695 "uuid": "677a259e-a9fd-4ee6-bc14-8af79463adea", 00:20:44.695 "assigned_rate_limits": { 00:20:44.695 "rw_ios_per_sec": 0, 00:20:44.695 "rw_mbytes_per_sec": 0, 00:20:44.695 "r_mbytes_per_sec": 0, 00:20:44.695 "w_mbytes_per_sec": 0 00:20:44.695 }, 00:20:44.695 "claimed": false, 00:20:44.695 "zoned": false, 00:20:44.695 "supported_io_types": { 00:20:44.695 "read": true, 00:20:44.695 "write": true, 00:20:44.695 "unmap": true, 00:20:44.695 "write_zeroes": true, 00:20:44.696 "flush": true, 00:20:44.696 "reset": true, 00:20:44.696 "compare": false, 00:20:44.696 "compare_and_write": false, 00:20:44.696 "abort": true, 00:20:44.696 "nvme_admin": false, 00:20:44.696 "nvme_io": false 00:20:44.696 }, 00:20:44.696 "memory_domains": [ 00:20:44.696 { 00:20:44.696 "dma_device_id": "system", 00:20:44.696 "dma_device_type": 1 00:20:44.696 }, 00:20:44.696 { 00:20:44.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.696 "dma_device_type": 2 00:20:44.696 } 00:20:44.696 ], 00:20:44.696 "driver_specific": {} 00:20:44.696 } 00:20:44.696 ] 00:20:44.696 00:34:38 -- common/autotest_common.sh@893 -- # return 0 00:20:44.696 00:34:38 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:44.954 [2024-04-24 00:34:38.658464] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:44.954 [2024-04-24 00:34:38.660694] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:44.954 [2024-04-24 00:34:38.660896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:44.954 [2024-04-24 00:34:38.660983] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:44.954 [2024-04-24 00:34:38.661044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:44.954 [2024-04-24 00:34:38.661127] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:44.955 [2024-04-24 00:34:38.661178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:44.955 00:34:38 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.955 00:34:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.212 00:34:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:45.212 "name": "Existed_Raid", 00:20:45.212 "uuid": "08daaaad-fa0d-443a-b939-9abfc91ace8e", 00:20:45.212 "strip_size_kb": 64, 00:20:45.212 "state": "configuring", 00:20:45.212 "raid_level": "raid0", 00:20:45.212 "superblock": true, 00:20:45.212 "num_base_bdevs": 4, 00:20:45.212 "num_base_bdevs_discovered": 1, 00:20:45.212 "num_base_bdevs_operational": 4, 00:20:45.212 "base_bdevs_list": [ 00:20:45.212 { 00:20:45.212 "name": "BaseBdev1", 00:20:45.212 "uuid": "677a259e-a9fd-4ee6-bc14-8af79463adea", 00:20:45.212 "is_configured": true, 00:20:45.212 "data_offset": 2048, 00:20:45.212 "data_size": 63488 00:20:45.212 }, 00:20:45.212 { 00:20:45.212 "name": "BaseBdev2", 00:20:45.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.213 "is_configured": false, 00:20:45.213 "data_offset": 0, 00:20:45.213 "data_size": 0 00:20:45.213 }, 00:20:45.213 { 00:20:45.213 "name": "BaseBdev3", 00:20:45.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.213 "is_configured": false, 00:20:45.213 "data_offset": 0, 00:20:45.213 "data_size": 0 00:20:45.213 }, 00:20:45.213 { 00:20:45.213 "name": "BaseBdev4", 00:20:45.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.213 "is_configured": false, 00:20:45.213 "data_offset": 0, 00:20:45.213 "data_size": 0 00:20:45.213 } 00:20:45.213 ] 00:20:45.213 }' 00:20:45.213 00:34:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:45.213 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:20:45.779 00:34:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:46.037 [2024-04-24 00:34:39.721405] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.037 BaseBdev2 00:20:46.037 00:34:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:46.037 00:34:39 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:20:46.037 00:34:39 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:46.037 00:34:39 -- common/autotest_common.sh@887 -- # local i 00:20:46.037 00:34:39 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:46.037 00:34:39 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:46.037 00:34:39 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:46.295 00:34:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:46.554 [ 00:20:46.554 { 00:20:46.554 "name": "BaseBdev2", 00:20:46.554 "aliases": [ 00:20:46.554 "bf79c471-1017-4021-ba6c-abde19224291" 00:20:46.554 ], 00:20:46.554 "product_name": "Malloc disk", 00:20:46.554 "block_size": 512, 00:20:46.554 "num_blocks": 65536, 00:20:46.554 "uuid": "bf79c471-1017-4021-ba6c-abde19224291", 00:20:46.554 "assigned_rate_limits": { 00:20:46.554 "rw_ios_per_sec": 0, 00:20:46.554 
"rw_mbytes_per_sec": 0, 00:20:46.554 "r_mbytes_per_sec": 0, 00:20:46.554 "w_mbytes_per_sec": 0 00:20:46.554 }, 00:20:46.554 "claimed": true, 00:20:46.554 "claim_type": "exclusive_write", 00:20:46.554 "zoned": false, 00:20:46.554 "supported_io_types": { 00:20:46.554 "read": true, 00:20:46.554 "write": true, 00:20:46.554 "unmap": true, 00:20:46.554 "write_zeroes": true, 00:20:46.554 "flush": true, 00:20:46.554 "reset": true, 00:20:46.554 "compare": false, 00:20:46.554 "compare_and_write": false, 00:20:46.554 "abort": true, 00:20:46.554 "nvme_admin": false, 00:20:46.554 "nvme_io": false 00:20:46.554 }, 00:20:46.554 "memory_domains": [ 00:20:46.554 { 00:20:46.554 "dma_device_id": "system", 00:20:46.554 "dma_device_type": 1 00:20:46.554 }, 00:20:46.554 { 00:20:46.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.554 "dma_device_type": 2 00:20:46.554 } 00:20:46.554 ], 00:20:46.554 "driver_specific": {} 00:20:46.554 } 00:20:46.554 ] 00:20:46.554 00:34:40 -- common/autotest_common.sh@893 -- # return 0 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.554 00:34:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.811 00:34:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:46.811 "name": "Existed_Raid", 00:20:46.811 "uuid": "08daaaad-fa0d-443a-b939-9abfc91ace8e", 00:20:46.811 "strip_size_kb": 64, 00:20:46.811 "state": "configuring", 00:20:46.811 "raid_level": "raid0", 00:20:46.811 "superblock": true, 00:20:46.811 "num_base_bdevs": 4, 00:20:46.811 "num_base_bdevs_discovered": 2, 00:20:46.811 "num_base_bdevs_operational": 4, 00:20:46.811 "base_bdevs_list": [ 00:20:46.811 { 00:20:46.811 "name": "BaseBdev1", 00:20:46.811 "uuid": "677a259e-a9fd-4ee6-bc14-8af79463adea", 00:20:46.811 "is_configured": true, 00:20:46.811 "data_offset": 2048, 00:20:46.811 "data_size": 63488 00:20:46.811 }, 00:20:46.811 { 00:20:46.811 "name": "BaseBdev2", 00:20:46.811 "uuid": "bf79c471-1017-4021-ba6c-abde19224291", 00:20:46.811 "is_configured": true, 00:20:46.811 "data_offset": 2048, 00:20:46.811 "data_size": 63488 00:20:46.811 }, 00:20:46.811 { 00:20:46.811 "name": "BaseBdev3", 00:20:46.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.811 "is_configured": false, 00:20:46.811 "data_offset": 0, 00:20:46.811 "data_size": 0 00:20:46.811 }, 00:20:46.811 { 00:20:46.811 "name": "BaseBdev4", 00:20:46.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.811 "is_configured": false, 00:20:46.811 "data_offset": 0, 00:20:46.811 "data_size": 
0 00:20:46.811 } 00:20:46.811 ] 00:20:46.811 }' 00:20:46.811 00:34:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:46.811 00:34:40 -- common/autotest_common.sh@10 -- # set +x 00:20:47.376 00:34:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:47.634 [2024-04-24 00:34:41.346609] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:47.634 BaseBdev3 00:20:47.634 00:34:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:47.634 00:34:41 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:20:47.634 00:34:41 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:47.634 00:34:41 -- common/autotest_common.sh@887 -- # local i 00:20:47.634 00:34:41 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:47.634 00:34:41 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:47.634 00:34:41 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:47.894 00:34:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:48.155 [ 00:20:48.155 { 00:20:48.155 "name": "BaseBdev3", 00:20:48.155 "aliases": [ 00:20:48.155 "5edf4afb-f05e-417b-b52f-c382ac408fc9" 00:20:48.155 ], 00:20:48.155 "product_name": "Malloc disk", 00:20:48.155 "block_size": 512, 00:20:48.155 "num_blocks": 65536, 00:20:48.155 "uuid": "5edf4afb-f05e-417b-b52f-c382ac408fc9", 00:20:48.155 "assigned_rate_limits": { 00:20:48.155 "rw_ios_per_sec": 0, 00:20:48.155 "rw_mbytes_per_sec": 0, 00:20:48.155 "r_mbytes_per_sec": 0, 00:20:48.155 "w_mbytes_per_sec": 0 00:20:48.155 }, 00:20:48.155 "claimed": true, 00:20:48.155 "claim_type": "exclusive_write", 00:20:48.155 "zoned": false, 00:20:48.155 "supported_io_types": { 00:20:48.155 "read": true, 00:20:48.155 "write": true, 00:20:48.155 "unmap": true, 00:20:48.155 "write_zeroes": true, 00:20:48.155 "flush": true, 00:20:48.155 "reset": true, 00:20:48.155 "compare": false, 00:20:48.155 "compare_and_write": false, 00:20:48.155 "abort": true, 00:20:48.155 "nvme_admin": false, 00:20:48.155 "nvme_io": false 00:20:48.155 }, 00:20:48.155 "memory_domains": [ 00:20:48.155 { 00:20:48.155 "dma_device_id": "system", 00:20:48.155 "dma_device_type": 1 00:20:48.155 }, 00:20:48.155 { 00:20:48.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.155 "dma_device_type": 2 00:20:48.155 } 00:20:48.155 ], 00:20:48.155 "driver_specific": {} 00:20:48.155 } 00:20:48.155 ] 00:20:48.413 00:34:41 -- common/autotest_common.sh@893 -- # return 0 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@124 -- # 
local num_base_bdevs_discovered 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.413 00:34:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.413 00:34:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:48.413 "name": "Existed_Raid", 00:20:48.413 "uuid": "08daaaad-fa0d-443a-b939-9abfc91ace8e", 00:20:48.413 "strip_size_kb": 64, 00:20:48.413 "state": "configuring", 00:20:48.413 "raid_level": "raid0", 00:20:48.413 "superblock": true, 00:20:48.413 "num_base_bdevs": 4, 00:20:48.413 "num_base_bdevs_discovered": 3, 00:20:48.413 "num_base_bdevs_operational": 4, 00:20:48.413 "base_bdevs_list": [ 00:20:48.413 { 00:20:48.413 "name": "BaseBdev1", 00:20:48.413 "uuid": "677a259e-a9fd-4ee6-bc14-8af79463adea", 00:20:48.413 "is_configured": true, 00:20:48.413 "data_offset": 2048, 00:20:48.413 "data_size": 63488 00:20:48.413 }, 00:20:48.413 { 00:20:48.413 "name": "BaseBdev2", 00:20:48.413 "uuid": "bf79c471-1017-4021-ba6c-abde19224291", 00:20:48.413 "is_configured": true, 00:20:48.413 "data_offset": 2048, 00:20:48.413 "data_size": 63488 00:20:48.413 }, 00:20:48.413 { 00:20:48.413 "name": "BaseBdev3", 00:20:48.413 "uuid": "5edf4afb-f05e-417b-b52f-c382ac408fc9", 00:20:48.413 "is_configured": true, 00:20:48.413 "data_offset": 2048, 00:20:48.413 "data_size": 63488 00:20:48.413 }, 00:20:48.413 { 00:20:48.413 "name": "BaseBdev4", 00:20:48.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.413 "is_configured": false, 00:20:48.413 "data_offset": 0, 00:20:48.413 "data_size": 0 00:20:48.413 } 00:20:48.413 ] 00:20:48.413 }' 00:20:48.413 00:34:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:48.413 00:34:42 -- common/autotest_common.sh@10 -- # set +x 00:20:49.349 00:34:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:49.607 [2024-04-24 00:34:43.157063] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:49.607 [2024-04-24 00:34:43.157578] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:20:49.607 [2024-04-24 00:34:43.157698] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:49.607 [2024-04-24 00:34:43.157933] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:49.607 [2024-04-24 00:34:43.158386] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:20:49.607 BaseBdev4 00:20:49.607 [2024-04-24 00:34:43.158518] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:20:49.607 [2024-04-24 00:34:43.158764] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.607 00:34:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:49.607 00:34:43 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:20:49.607 00:34:43 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:49.607 00:34:43 -- common/autotest_common.sh@887 -- # local i 00:20:49.607 00:34:43 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:49.607 00:34:43 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:49.607 00:34:43 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:20:49.866 00:34:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:50.176 [ 00:20:50.176 { 00:20:50.176 "name": "BaseBdev4", 00:20:50.176 "aliases": [ 00:20:50.176 "cfc81ae1-59e9-432d-9ad8-546a4362e00a" 00:20:50.176 ], 00:20:50.176 "product_name": "Malloc disk", 00:20:50.176 "block_size": 512, 00:20:50.176 "num_blocks": 65536, 00:20:50.176 "uuid": "cfc81ae1-59e9-432d-9ad8-546a4362e00a", 00:20:50.176 "assigned_rate_limits": { 00:20:50.176 "rw_ios_per_sec": 0, 00:20:50.176 "rw_mbytes_per_sec": 0, 00:20:50.176 "r_mbytes_per_sec": 0, 00:20:50.176 "w_mbytes_per_sec": 0 00:20:50.176 }, 00:20:50.176 "claimed": true, 00:20:50.176 "claim_type": "exclusive_write", 00:20:50.176 "zoned": false, 00:20:50.176 "supported_io_types": { 00:20:50.176 "read": true, 00:20:50.176 "write": true, 00:20:50.176 "unmap": true, 00:20:50.176 "write_zeroes": true, 00:20:50.176 "flush": true, 00:20:50.176 "reset": true, 00:20:50.176 "compare": false, 00:20:50.176 "compare_and_write": false, 00:20:50.176 "abort": true, 00:20:50.176 "nvme_admin": false, 00:20:50.176 "nvme_io": false 00:20:50.176 }, 00:20:50.176 "memory_domains": [ 00:20:50.176 { 00:20:50.176 "dma_device_id": "system", 00:20:50.176 "dma_device_type": 1 00:20:50.176 }, 00:20:50.176 { 00:20:50.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.176 "dma_device_type": 2 00:20:50.176 } 00:20:50.176 ], 00:20:50.176 "driver_specific": {} 00:20:50.176 } 00:20:50.176 ] 00:20:50.176 00:34:43 -- common/autotest_common.sh@893 -- # return 0 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.176 00:34:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.435 00:34:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:50.435 "name": "Existed_Raid", 00:20:50.435 "uuid": "08daaaad-fa0d-443a-b939-9abfc91ace8e", 00:20:50.435 "strip_size_kb": 64, 00:20:50.435 "state": "online", 00:20:50.435 "raid_level": "raid0", 00:20:50.435 "superblock": true, 00:20:50.435 "num_base_bdevs": 4, 00:20:50.435 "num_base_bdevs_discovered": 4, 00:20:50.435 "num_base_bdevs_operational": 4, 00:20:50.435 "base_bdevs_list": [ 00:20:50.435 { 00:20:50.435 "name": "BaseBdev1", 00:20:50.435 "uuid": "677a259e-a9fd-4ee6-bc14-8af79463adea", 00:20:50.435 "is_configured": true, 00:20:50.435 "data_offset": 2048, 00:20:50.435 "data_size": 63488 00:20:50.435 }, 00:20:50.435 { 00:20:50.435 "name": "BaseBdev2", 00:20:50.435 
"uuid": "bf79c471-1017-4021-ba6c-abde19224291", 00:20:50.435 "is_configured": true, 00:20:50.435 "data_offset": 2048, 00:20:50.435 "data_size": 63488 00:20:50.435 }, 00:20:50.435 { 00:20:50.435 "name": "BaseBdev3", 00:20:50.435 "uuid": "5edf4afb-f05e-417b-b52f-c382ac408fc9", 00:20:50.435 "is_configured": true, 00:20:50.435 "data_offset": 2048, 00:20:50.435 "data_size": 63488 00:20:50.435 }, 00:20:50.435 { 00:20:50.435 "name": "BaseBdev4", 00:20:50.435 "uuid": "cfc81ae1-59e9-432d-9ad8-546a4362e00a", 00:20:50.435 "is_configured": true, 00:20:50.435 "data_offset": 2048, 00:20:50.435 "data_size": 63488 00:20:50.435 } 00:20:50.435 ] 00:20:50.435 }' 00:20:50.435 00:34:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:50.435 00:34:44 -- common/autotest_common.sh@10 -- # set +x 00:20:51.002 00:34:44 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:51.260 [2024-04-24 00:34:44.883480] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:51.260 [2024-04-24 00:34:44.883715] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:51.260 [2024-04-24 00:34:44.883859] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@197 -- # return 1 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.260 00:34:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.825 00:34:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:51.825 "name": "Existed_Raid", 00:20:51.825 "uuid": "08daaaad-fa0d-443a-b939-9abfc91ace8e", 00:20:51.825 "strip_size_kb": 64, 00:20:51.825 "state": "offline", 00:20:51.825 "raid_level": "raid0", 00:20:51.825 "superblock": true, 00:20:51.826 "num_base_bdevs": 4, 00:20:51.826 "num_base_bdevs_discovered": 3, 00:20:51.826 "num_base_bdevs_operational": 3, 00:20:51.826 "base_bdevs_list": [ 00:20:51.826 { 00:20:51.826 "name": null, 00:20:51.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.826 "is_configured": false, 00:20:51.826 "data_offset": 2048, 00:20:51.826 "data_size": 63488 00:20:51.826 }, 00:20:51.826 { 00:20:51.826 "name": "BaseBdev2", 00:20:51.826 "uuid": "bf79c471-1017-4021-ba6c-abde19224291", 00:20:51.826 "is_configured": true, 00:20:51.826 "data_offset": 2048, 
00:20:51.826 "data_size": 63488 00:20:51.826 }, 00:20:51.826 { 00:20:51.826 "name": "BaseBdev3", 00:20:51.826 "uuid": "5edf4afb-f05e-417b-b52f-c382ac408fc9", 00:20:51.826 "is_configured": true, 00:20:51.826 "data_offset": 2048, 00:20:51.826 "data_size": 63488 00:20:51.826 }, 00:20:51.826 { 00:20:51.826 "name": "BaseBdev4", 00:20:51.826 "uuid": "cfc81ae1-59e9-432d-9ad8-546a4362e00a", 00:20:51.826 "is_configured": true, 00:20:51.826 "data_offset": 2048, 00:20:51.826 "data_size": 63488 00:20:51.826 } 00:20:51.826 ] 00:20:51.826 }' 00:20:51.826 00:34:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:51.826 00:34:45 -- common/autotest_common.sh@10 -- # set +x 00:20:52.405 00:34:45 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:52.405 00:34:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:52.405 00:34:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:52.405 00:34:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.667 00:34:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:52.667 00:34:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:52.667 00:34:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:52.925 [2024-04-24 00:34:46.563660] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:52.925 00:34:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:52.925 00:34:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:52.925 00:34:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.925 00:34:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:53.185 00:34:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:53.185 00:34:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:53.185 00:34:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:53.751 [2024-04-24 00:34:47.241353] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:53.751 00:34:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:53.751 00:34:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:53.751 00:34:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.751 00:34:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:54.009 00:34:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:54.009 00:34:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:54.009 00:34:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:54.267 [2024-04-24 00:34:47.916152] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:54.267 [2024-04-24 00:34:47.916421] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:20:54.267 00:34:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:54.267 00:34:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:54.267 00:34:48 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.267 00:34:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 
00:20:54.581 00:34:48 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:54.581 00:34:48 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:54.581 00:34:48 -- bdev/bdev_raid.sh@287 -- # killprocess 127475 00:20:54.581 00:34:48 -- common/autotest_common.sh@936 -- # '[' -z 127475 ']' 00:20:54.581 00:34:48 -- common/autotest_common.sh@940 -- # kill -0 127475 00:20:54.581 00:34:48 -- common/autotest_common.sh@941 -- # uname 00:20:54.581 00:34:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:54.581 00:34:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127475 00:20:54.841 00:34:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:54.841 00:34:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:54.841 00:34:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127475' 00:20:54.841 killing process with pid 127475 00:20:54.841 00:34:48 -- common/autotest_common.sh@955 -- # kill 127475 00:20:54.841 [2024-04-24 00:34:48.357623] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:54.841 00:34:48 -- common/autotest_common.sh@960 -- # wait 127475 00:20:54.841 [2024-04-24 00:34:48.357865] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:56.216 ************************************ 00:20:56.216 END TEST raid_state_function_test_sb 00:20:56.216 ************************************ 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:56.216 00:20:56.216 real 0m16.992s 00:20:56.216 user 0m29.363s 00:20:56.216 sys 0m2.378s 00:20:56.216 00:34:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:56.216 00:34:49 -- common/autotest_common.sh@10 -- # set +x 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:20:56.216 00:34:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:20:56.216 00:34:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:56.216 00:34:49 -- common/autotest_common.sh@10 -- # set +x 00:20:56.216 ************************************ 00:20:56.216 START TEST raid_superblock_test 00:20:56.216 ************************************ 00:20:56.216 00:34:49 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid0 4 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@357 -- # raid_pid=127951 00:20:56.216 00:34:49 -- 
bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:56.216 00:34:49 -- bdev/bdev_raid.sh@358 -- # waitforlisten 127951 /var/tmp/spdk-raid.sock 00:20:56.216 00:34:49 -- common/autotest_common.sh@817 -- # '[' -z 127951 ']' 00:20:56.216 00:34:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:56.216 00:34:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:56.216 00:34:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:56.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:56.216 00:34:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:56.216 00:34:49 -- common/autotest_common.sh@10 -- # set +x 00:20:56.216 [2024-04-24 00:34:49.974317] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:20:56.216 [2024-04-24 00:34:49.974831] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127951 ] 00:20:56.474 [2024-04-24 00:34:50.157638] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.732 [2024-04-24 00:34:50.375305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.989 [2024-04-24 00:34:50.605951] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:57.247 00:34:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:57.247 00:34:51 -- common/autotest_common.sh@850 -- # return 0 00:20:57.247 00:34:51 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:57.247 00:34:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:57.247 00:34:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:57.247 00:34:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:57.247 00:34:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:57.247 00:34:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:57.247 00:34:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:57.247 00:34:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:57.247 00:34:51 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:57.505 malloc1 00:20:57.505 00:34:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:57.763 [2024-04-24 00:34:51.488355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:57.763 [2024-04-24 00:34:51.488649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.763 [2024-04-24 00:34:51.488722] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:57.763 [2024-04-24 00:34:51.488940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.763 [2024-04-24 00:34:51.491663] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.763 [2024-04-24 00:34:51.491841] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:57.763 pt1 00:20:57.763 00:34:51 
-- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:57.763 00:34:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:57.763 00:34:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:57.763 00:34:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:57.763 00:34:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:57.763 00:34:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:57.763 00:34:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:57.763 00:34:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:57.763 00:34:51 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:58.019 malloc2 00:20:58.019 00:34:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:58.277 [2024-04-24 00:34:52.033961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:58.277 [2024-04-24 00:34:52.034253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.277 [2024-04-24 00:34:52.034401] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:58.277 [2024-04-24 00:34:52.034549] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.277 [2024-04-24 00:34:52.037223] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.277 [2024-04-24 00:34:52.037405] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:58.277 pt2 00:20:58.277 00:34:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:58.277 00:34:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:58.277 00:34:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:58.277 00:34:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:58.277 00:34:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:58.277 00:34:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:58.277 00:34:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:58.277 00:34:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:58.277 00:34:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:58.534 malloc3 00:20:58.793 00:34:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:59.051 [2024-04-24 00:34:52.614793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:59.051 [2024-04-24 00:34:52.615102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.051 [2024-04-24 00:34:52.615260] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:59.051 [2024-04-24 00:34:52.615392] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.051 [2024-04-24 00:34:52.618026] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.051 [2024-04-24 00:34:52.618210] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:59.051 pt3 00:20:59.051 00:34:52 -- 
bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:59.051 00:34:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:59.051 00:34:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:20:59.051 00:34:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:20:59.051 00:34:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:59.051 00:34:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:59.051 00:34:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:59.051 00:34:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:59.051 00:34:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:59.356 malloc4 00:20:59.356 00:34:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:59.356 [2024-04-24 00:34:53.088415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:59.356 [2024-04-24 00:34:53.088744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.356 [2024-04-24 00:34:53.088818] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:59.356 [2024-04-24 00:34:53.088975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.356 [2024-04-24 00:34:53.091626] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.357 [2024-04-24 00:34:53.091811] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:59.357 pt4 00:20:59.357 00:34:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:59.357 00:34:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:59.357 00:34:53 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:59.630 [2024-04-24 00:34:53.300587] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:59.630 [2024-04-24 00:34:53.302997] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:59.630 [2024-04-24 00:34:53.303250] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:59.630 [2024-04-24 00:34:53.303472] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:59.630 [2024-04-24 00:34:53.303797] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:20:59.630 [2024-04-24 00:34:53.303906] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:59.630 [2024-04-24 00:34:53.304085] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:59.630 [2024-04-24 00:34:53.304483] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:20:59.630 [2024-04-24 00:34:53.304589] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:20:59.630 [2024-04-24 00:34:53.304879] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.630 00:34:53 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:20:59.630 00:34:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:59.630 00:34:53 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:59.631 00:34:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:20:59.631 00:34:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:59.631 00:34:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:59.631 00:34:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:59.631 00:34:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:59.631 00:34:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:59.631 00:34:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:59.631 00:34:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.631 00:34:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.890 00:34:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.890 "name": "raid_bdev1", 00:20:59.890 "uuid": "287b7ba0-608c-48bd-aea6-7a02be349792", 00:20:59.890 "strip_size_kb": 64, 00:20:59.890 "state": "online", 00:20:59.890 "raid_level": "raid0", 00:20:59.890 "superblock": true, 00:20:59.890 "num_base_bdevs": 4, 00:20:59.890 "num_base_bdevs_discovered": 4, 00:20:59.890 "num_base_bdevs_operational": 4, 00:20:59.890 "base_bdevs_list": [ 00:20:59.890 { 00:20:59.890 "name": "pt1", 00:20:59.890 "uuid": "b6c97b05-1321-5e83-b219-08ad4a708930", 00:20:59.890 "is_configured": true, 00:20:59.890 "data_offset": 2048, 00:20:59.890 "data_size": 63488 00:20:59.890 }, 00:20:59.890 { 00:20:59.890 "name": "pt2", 00:20:59.890 "uuid": "7e6319ab-7b02-574a-af3e-59c6d0820d48", 00:20:59.890 "is_configured": true, 00:20:59.890 "data_offset": 2048, 00:20:59.890 "data_size": 63488 00:20:59.890 }, 00:20:59.890 { 00:20:59.890 "name": "pt3", 00:20:59.890 "uuid": "37153d95-362a-51de-bd5b-040630a85aab", 00:20:59.891 "is_configured": true, 00:20:59.891 "data_offset": 2048, 00:20:59.891 "data_size": 63488 00:20:59.891 }, 00:20:59.891 { 00:20:59.891 "name": "pt4", 00:20:59.891 "uuid": "a36f157e-c6a9-5cfb-8761-09a0d698b9e2", 00:20:59.891 "is_configured": true, 00:20:59.891 "data_offset": 2048, 00:20:59.891 "data_size": 63488 00:20:59.891 } 00:20:59.891 ] 00:20:59.891 }' 00:20:59.891 00:34:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.891 00:34:53 -- common/autotest_common.sh@10 -- # set +x 00:21:00.457 00:34:54 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:00.457 00:34:54 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:21:00.716 [2024-04-24 00:34:54.273297] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:00.716 00:34:54 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=287b7ba0-608c-48bd-aea6-7a02be349792 00:21:00.716 00:34:54 -- bdev/bdev_raid.sh@380 -- # '[' -z 287b7ba0-608c-48bd-aea6-7a02be349792 ']' 00:21:00.716 00:34:54 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:00.716 [2024-04-24 00:34:54.501056] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:00.716 [2024-04-24 00:34:54.501242] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:00.716 [2024-04-24 00:34:54.501412] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.716 [2024-04-24 00:34:54.501564] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:21:00.716 [2024-04-24 00:34:54.501645] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:21:00.973 00:34:54 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.973 00:34:54 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:21:00.973 00:34:54 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:21:00.973 00:34:54 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:21:00.973 00:34:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:00.973 00:34:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:01.231 00:34:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:01.231 00:34:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:01.489 00:34:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:01.489 00:34:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:01.765 00:34:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:01.765 00:34:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:02.023 00:34:55 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:02.023 00:34:55 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:02.281 00:34:55 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:21:02.281 00:34:55 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:02.281 00:34:55 -- common/autotest_common.sh@638 -- # local es=0 00:21:02.281 00:34:55 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:02.281 00:34:55 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:02.281 00:34:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:02.281 00:34:55 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:02.281 00:34:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:02.281 00:34:55 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:02.281 00:34:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:02.281 00:34:55 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:02.281 00:34:55 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:02.281 00:34:55 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:02.538 [2024-04-24 00:34:56.109340] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:02.538 [2024-04-24 00:34:56.111748] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:02.538 
[2024-04-24 00:34:56.111944] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:02.538 [2024-04-24 00:34:56.112023] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:02.538 [2024-04-24 00:34:56.112197] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:21:02.538 [2024-04-24 00:34:56.112364] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:21:02.538 [2024-04-24 00:34:56.112431] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:21:02.539 [2024-04-24 00:34:56.112650] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:21:02.539 [2024-04-24 00:34:56.112773] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.539 [2024-04-24 00:34:56.112889] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:21:02.539 request: 00:21:02.539 { 00:21:02.539 "name": "raid_bdev1", 00:21:02.539 "raid_level": "raid0", 00:21:02.539 "base_bdevs": [ 00:21:02.539 "malloc1", 00:21:02.539 "malloc2", 00:21:02.539 "malloc3", 00:21:02.539 "malloc4" 00:21:02.539 ], 00:21:02.539 "superblock": false, 00:21:02.539 "strip_size_kb": 64, 00:21:02.539 "method": "bdev_raid_create", 00:21:02.539 "req_id": 1 00:21:02.539 } 00:21:02.539 Got JSON-RPC error response 00:21:02.539 response: 00:21:02.539 { 00:21:02.539 "code": -17, 00:21:02.539 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:02.539 } 00:21:02.539 00:34:56 -- common/autotest_common.sh@641 -- # es=1 00:21:02.539 00:34:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:02.539 00:34:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:02.539 00:34:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:02.539 00:34:56 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:21:02.539 00:34:56 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.796 00:34:56 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:21:02.796 00:34:56 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:21:02.797 00:34:56 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:03.054 [2024-04-24 00:34:56.597482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:03.054 [2024-04-24 00:34:56.597780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.054 [2024-04-24 00:34:56.597922] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:03.054 [2024-04-24 00:34:56.598030] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.054 [2024-04-24 00:34:56.600671] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.054 [2024-04-24 00:34:56.600896] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:03.054 [2024-04-24 00:34:56.601139] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:21:03.054 [2024-04-24 00:34:56.601307] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:03.054 pt1 
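In short, the error-path check traced above reduces to the following RPC calls, shown here as a condensed sketch rather than the test script itself (the socket path, bdev names and UUID are copied from the trace; the rpc wrapper and the expect-failure handling are illustrative):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # The malloc bdevs still carry the raid superblock written earlier, so creating the
  # raid directly on top of them is rejected with JSON-RPC error -17 (File exists).
  rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 && echo "unexpected success"
  # Re-creating the passthru bdev on top of malloc1 lets the examine path find that
  # superblock again, so pt1 is claimed and raid_bdev1 reappears in "configuring" state.
  rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

The verify step that follows in the trace confirms exactly that: raid_bdev1 is back in "configuring" with one of four base bdevs discovered.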
00:21:03.054 00:34:56 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:03.054 00:34:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:03.054 00:34:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:03.054 00:34:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:03.054 00:34:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:03.054 00:34:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:03.054 00:34:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:03.054 00:34:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:03.054 00:34:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:03.054 00:34:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:03.054 00:34:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.054 00:34:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.312 00:34:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:03.312 "name": "raid_bdev1", 00:21:03.312 "uuid": "287b7ba0-608c-48bd-aea6-7a02be349792", 00:21:03.312 "strip_size_kb": 64, 00:21:03.312 "state": "configuring", 00:21:03.312 "raid_level": "raid0", 00:21:03.312 "superblock": true, 00:21:03.312 "num_base_bdevs": 4, 00:21:03.312 "num_base_bdevs_discovered": 1, 00:21:03.312 "num_base_bdevs_operational": 4, 00:21:03.312 "base_bdevs_list": [ 00:21:03.312 { 00:21:03.312 "name": "pt1", 00:21:03.312 "uuid": "b6c97b05-1321-5e83-b219-08ad4a708930", 00:21:03.312 "is_configured": true, 00:21:03.312 "data_offset": 2048, 00:21:03.312 "data_size": 63488 00:21:03.312 }, 00:21:03.312 { 00:21:03.312 "name": null, 00:21:03.312 "uuid": "7e6319ab-7b02-574a-af3e-59c6d0820d48", 00:21:03.312 "is_configured": false, 00:21:03.312 "data_offset": 2048, 00:21:03.312 "data_size": 63488 00:21:03.312 }, 00:21:03.312 { 00:21:03.312 "name": null, 00:21:03.312 "uuid": "37153d95-362a-51de-bd5b-040630a85aab", 00:21:03.312 "is_configured": false, 00:21:03.312 "data_offset": 2048, 00:21:03.312 "data_size": 63488 00:21:03.312 }, 00:21:03.312 { 00:21:03.312 "name": null, 00:21:03.312 "uuid": "a36f157e-c6a9-5cfb-8761-09a0d698b9e2", 00:21:03.312 "is_configured": false, 00:21:03.312 "data_offset": 2048, 00:21:03.312 "data_size": 63488 00:21:03.312 } 00:21:03.312 ] 00:21:03.312 }' 00:21:03.312 00:34:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:03.312 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:21:03.877 00:34:57 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:21:03.877 00:34:57 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:04.149 [2024-04-24 00:34:57.705858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:04.149 [2024-04-24 00:34:57.706163] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.149 [2024-04-24 00:34:57.706308] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:04.149 [2024-04-24 00:34:57.706410] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.149 [2024-04-24 00:34:57.707039] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.149 [2024-04-24 00:34:57.707214] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:21:04.149 [2024-04-24 00:34:57.707442] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:04.149 [2024-04-24 00:34:57.707576] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:04.149 pt2 00:21:04.149 00:34:57 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:04.408 [2024-04-24 00:34:57.969997] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:04.408 00:34:57 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:21:04.408 00:34:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:04.408 00:34:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:04.408 00:34:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:04.408 00:34:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:04.408 00:34:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:04.408 00:34:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:04.408 00:34:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:04.408 00:34:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:04.408 00:34:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:04.408 00:34:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.408 00:34:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.666 00:34:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:04.666 "name": "raid_bdev1", 00:21:04.666 "uuid": "287b7ba0-608c-48bd-aea6-7a02be349792", 00:21:04.666 "strip_size_kb": 64, 00:21:04.666 "state": "configuring", 00:21:04.666 "raid_level": "raid0", 00:21:04.666 "superblock": true, 00:21:04.666 "num_base_bdevs": 4, 00:21:04.666 "num_base_bdevs_discovered": 1, 00:21:04.666 "num_base_bdevs_operational": 4, 00:21:04.666 "base_bdevs_list": [ 00:21:04.666 { 00:21:04.666 "name": "pt1", 00:21:04.666 "uuid": "b6c97b05-1321-5e83-b219-08ad4a708930", 00:21:04.666 "is_configured": true, 00:21:04.666 "data_offset": 2048, 00:21:04.666 "data_size": 63488 00:21:04.666 }, 00:21:04.666 { 00:21:04.666 "name": null, 00:21:04.666 "uuid": "7e6319ab-7b02-574a-af3e-59c6d0820d48", 00:21:04.666 "is_configured": false, 00:21:04.666 "data_offset": 2048, 00:21:04.666 "data_size": 63488 00:21:04.666 }, 00:21:04.666 { 00:21:04.666 "name": null, 00:21:04.666 "uuid": "37153d95-362a-51de-bd5b-040630a85aab", 00:21:04.666 "is_configured": false, 00:21:04.666 "data_offset": 2048, 00:21:04.666 "data_size": 63488 00:21:04.666 }, 00:21:04.666 { 00:21:04.666 "name": null, 00:21:04.666 "uuid": "a36f157e-c6a9-5cfb-8761-09a0d698b9e2", 00:21:04.666 "is_configured": false, 00:21:04.666 "data_offset": 2048, 00:21:04.666 "data_size": 63488 00:21:04.666 } 00:21:04.666 ] 00:21:04.666 }' 00:21:04.666 00:34:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:04.666 00:34:58 -- common/autotest_common.sh@10 -- # set +x 00:21:05.232 00:34:58 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:21:05.232 00:34:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:05.232 00:34:58 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:05.490 [2024-04-24 00:34:59.042218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 
00:21:05.490 [2024-04-24 00:34:59.042512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.490 [2024-04-24 00:34:59.042593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:05.490 [2024-04-24 00:34:59.042727] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.490 [2024-04-24 00:34:59.043278] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.490 [2024-04-24 00:34:59.043460] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:05.490 [2024-04-24 00:34:59.043667] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:05.490 [2024-04-24 00:34:59.043772] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:05.490 pt2 00:21:05.490 00:34:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:05.490 00:34:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:05.490 00:34:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:05.490 [2024-04-24 00:34:59.270220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:05.490 [2024-04-24 00:34:59.270445] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.490 [2024-04-24 00:34:59.270593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:05.490 [2024-04-24 00:34:59.270691] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.490 [2024-04-24 00:34:59.271274] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.490 [2024-04-24 00:34:59.271448] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:05.490 [2024-04-24 00:34:59.271661] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:05.490 [2024-04-24 00:34:59.271780] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:05.490 pt3 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:05.750 [2024-04-24 00:34:59.510343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:05.750 [2024-04-24 00:34:59.510657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.750 [2024-04-24 00:34:59.510803] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:05.750 [2024-04-24 00:34:59.510909] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.750 [2024-04-24 00:34:59.511482] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.750 [2024-04-24 00:34:59.511662] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:05.750 [2024-04-24 00:34:59.511875] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:21:05.750 [2024-04-24 00:34:59.511989] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:05.750 [2024-04-24 
00:34:59.512233] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:21:05.750 [2024-04-24 00:34:59.512343] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:05.750 [2024-04-24 00:34:59.512530] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:05.750 [2024-04-24 00:34:59.512950] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:21:05.750 [2024-04-24 00:34:59.513066] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:21:05.750 [2024-04-24 00:34:59.513297] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.750 pt4 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.750 00:34:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.007 00:34:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:06.007 "name": "raid_bdev1", 00:21:06.007 "uuid": "287b7ba0-608c-48bd-aea6-7a02be349792", 00:21:06.007 "strip_size_kb": 64, 00:21:06.007 "state": "online", 00:21:06.007 "raid_level": "raid0", 00:21:06.007 "superblock": true, 00:21:06.007 "num_base_bdevs": 4, 00:21:06.007 "num_base_bdevs_discovered": 4, 00:21:06.007 "num_base_bdevs_operational": 4, 00:21:06.007 "base_bdevs_list": [ 00:21:06.007 { 00:21:06.007 "name": "pt1", 00:21:06.007 "uuid": "b6c97b05-1321-5e83-b219-08ad4a708930", 00:21:06.007 "is_configured": true, 00:21:06.007 "data_offset": 2048, 00:21:06.007 "data_size": 63488 00:21:06.008 }, 00:21:06.008 { 00:21:06.008 "name": "pt2", 00:21:06.008 "uuid": "7e6319ab-7b02-574a-af3e-59c6d0820d48", 00:21:06.008 "is_configured": true, 00:21:06.008 "data_offset": 2048, 00:21:06.008 "data_size": 63488 00:21:06.008 }, 00:21:06.008 { 00:21:06.008 "name": "pt3", 00:21:06.008 "uuid": "37153d95-362a-51de-bd5b-040630a85aab", 00:21:06.008 "is_configured": true, 00:21:06.008 "data_offset": 2048, 00:21:06.008 "data_size": 63488 00:21:06.008 }, 00:21:06.008 { 00:21:06.008 "name": "pt4", 00:21:06.008 "uuid": "a36f157e-c6a9-5cfb-8761-09a0d698b9e2", 00:21:06.008 "is_configured": true, 00:21:06.008 "data_offset": 2048, 00:21:06.008 "data_size": 63488 00:21:06.008 } 00:21:06.008 ] 00:21:06.008 }' 00:21:06.008 00:34:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:06.008 00:34:59 -- common/autotest_common.sh@10 -- # set +x 00:21:06.630 00:35:00 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:06.630 00:35:00 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:21:06.887 [2024-04-24 00:35:00.598878] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.887 00:35:00 -- bdev/bdev_raid.sh@430 -- # '[' 287b7ba0-608c-48bd-aea6-7a02be349792 '!=' 287b7ba0-608c-48bd-aea6-7a02be349792 ']' 00:21:06.887 00:35:00 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:21:06.887 00:35:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:06.887 00:35:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:06.887 00:35:00 -- bdev/bdev_raid.sh@511 -- # killprocess 127951 00:21:06.887 00:35:00 -- common/autotest_common.sh@936 -- # '[' -z 127951 ']' 00:21:06.887 00:35:00 -- common/autotest_common.sh@940 -- # kill -0 127951 00:21:06.887 00:35:00 -- common/autotest_common.sh@941 -- # uname 00:21:06.887 00:35:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:06.887 00:35:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127951 00:21:06.887 00:35:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:06.887 00:35:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:06.887 00:35:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127951' 00:21:06.887 killing process with pid 127951 00:21:06.887 00:35:00 -- common/autotest_common.sh@955 -- # kill 127951 00:21:06.888 00:35:00 -- common/autotest_common.sh@960 -- # wait 127951 00:21:06.888 [2024-04-24 00:35:00.643796] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:06.888 [2024-04-24 00:35:00.643874] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.888 [2024-04-24 00:35:00.643943] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:06.888 [2024-04-24 00:35:00.644092] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:21:07.453 [2024-04-24 00:35:01.092487] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:08.859 ************************************ 00:21:08.859 END TEST raid_superblock_test 00:21:08.859 ************************************ 00:21:08.859 00:35:02 -- bdev/bdev_raid.sh@513 -- # return 0 00:21:08.859 00:21:08.859 real 0m12.652s 00:21:08.859 user 0m21.262s 00:21:08.859 sys 0m1.686s 00:21:08.859 00:35:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:08.859 00:35:02 -- common/autotest_common.sh@10 -- # set +x 00:21:08.859 00:35:02 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:21:08.859 00:35:02 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:21:08.859 00:35:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:08.859 00:35:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:08.859 00:35:02 -- common/autotest_common.sh@10 -- # set +x 00:21:08.859 ************************************ 00:21:08.859 START TEST raid_state_function_test 00:21:08.859 ************************************ 00:21:08.859 00:35:02 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 4 false 00:21:08.859 00:35:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:21:08.859 00:35:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 
00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=128287 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:09.125 Process raid pid: 128287 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128287' 00:21:09.125 00:35:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128287 /var/tmp/spdk-raid.sock 00:21:09.125 00:35:02 -- common/autotest_common.sh@817 -- # '[' -z 128287 ']' 00:21:09.125 00:35:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:09.125 00:35:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:09.125 00:35:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:09.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:09.125 00:35:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:09.125 00:35:02 -- common/autotest_common.sh@10 -- # set +x 00:21:09.125 [2024-04-24 00:35:02.724053] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:21:09.125 [2024-04-24 00:35:02.724489] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.125 [2024-04-24 00:35:02.908346] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.381 [2024-04-24 00:35:03.156180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.639 [2024-04-24 00:35:03.393977] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:10.206 00:35:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:10.206 00:35:03 -- common/autotest_common.sh@850 -- # return 0 00:21:10.206 00:35:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:10.463 [2024-04-24 00:35:03.998991] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:10.463 [2024-04-24 00:35:03.999261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:10.463 [2024-04-24 00:35:03.999371] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:10.463 [2024-04-24 00:35:03.999431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:10.463 [2024-04-24 00:35:03.999580] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:10.463 [2024-04-24 00:35:03.999662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:10.463 [2024-04-24 00:35:03.999795] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:10.463 [2024-04-24 00:35:03.999855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:10.463 00:35:04 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:10.463 00:35:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:10.463 00:35:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:10.463 00:35:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:10.463 00:35:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:10.463 00:35:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:10.463 00:35:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:10.463 00:35:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:10.463 00:35:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:10.463 00:35:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:10.463 00:35:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.463 00:35:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.722 00:35:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:10.722 "name": "Existed_Raid", 00:21:10.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.722 "strip_size_kb": 64, 00:21:10.722 "state": "configuring", 00:21:10.722 "raid_level": "concat", 00:21:10.722 "superblock": false, 00:21:10.722 "num_base_bdevs": 4, 00:21:10.722 "num_base_bdevs_discovered": 0, 00:21:10.722 "num_base_bdevs_operational": 4, 00:21:10.722 "base_bdevs_list": [ 00:21:10.722 { 00:21:10.722 
"name": "BaseBdev1", 00:21:10.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.722 "is_configured": false, 00:21:10.722 "data_offset": 0, 00:21:10.722 "data_size": 0 00:21:10.722 }, 00:21:10.722 { 00:21:10.722 "name": "BaseBdev2", 00:21:10.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.722 "is_configured": false, 00:21:10.722 "data_offset": 0, 00:21:10.722 "data_size": 0 00:21:10.722 }, 00:21:10.722 { 00:21:10.722 "name": "BaseBdev3", 00:21:10.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.722 "is_configured": false, 00:21:10.722 "data_offset": 0, 00:21:10.722 "data_size": 0 00:21:10.722 }, 00:21:10.722 { 00:21:10.722 "name": "BaseBdev4", 00:21:10.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.722 "is_configured": false, 00:21:10.722 "data_offset": 0, 00:21:10.722 "data_size": 0 00:21:10.722 } 00:21:10.722 ] 00:21:10.722 }' 00:21:10.722 00:35:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:10.722 00:35:04 -- common/autotest_common.sh@10 -- # set +x 00:21:11.287 00:35:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:11.545 [2024-04-24 00:35:05.243149] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:11.545 [2024-04-24 00:35:05.243428] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:21:11.545 00:35:05 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:11.802 [2024-04-24 00:35:05.539277] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:11.802 [2024-04-24 00:35:05.539647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:11.802 [2024-04-24 00:35:05.539762] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:11.802 [2024-04-24 00:35:05.539840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:11.802 [2024-04-24 00:35:05.540004] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:11.802 [2024-04-24 00:35:05.540096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:11.802 [2024-04-24 00:35:05.540237] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:11.802 [2024-04-24 00:35:05.540309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:11.802 00:35:05 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:12.060 [2024-04-24 00:35:05.820350] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:12.060 BaseBdev1 00:21:12.060 00:35:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:12.060 00:35:05 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:21:12.060 00:35:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:12.060 00:35:05 -- common/autotest_common.sh@887 -- # local i 00:21:12.060 00:35:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:12.060 00:35:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:12.060 00:35:05 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:12.318 00:35:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:12.655 [ 00:21:12.655 { 00:21:12.655 "name": "BaseBdev1", 00:21:12.655 "aliases": [ 00:21:12.655 "e4d83020-64c8-4060-be54-7cb68ea06ef0" 00:21:12.655 ], 00:21:12.655 "product_name": "Malloc disk", 00:21:12.655 "block_size": 512, 00:21:12.655 "num_blocks": 65536, 00:21:12.655 "uuid": "e4d83020-64c8-4060-be54-7cb68ea06ef0", 00:21:12.655 "assigned_rate_limits": { 00:21:12.655 "rw_ios_per_sec": 0, 00:21:12.655 "rw_mbytes_per_sec": 0, 00:21:12.655 "r_mbytes_per_sec": 0, 00:21:12.655 "w_mbytes_per_sec": 0 00:21:12.655 }, 00:21:12.655 "claimed": true, 00:21:12.655 "claim_type": "exclusive_write", 00:21:12.655 "zoned": false, 00:21:12.655 "supported_io_types": { 00:21:12.655 "read": true, 00:21:12.655 "write": true, 00:21:12.655 "unmap": true, 00:21:12.655 "write_zeroes": true, 00:21:12.655 "flush": true, 00:21:12.655 "reset": true, 00:21:12.655 "compare": false, 00:21:12.655 "compare_and_write": false, 00:21:12.655 "abort": true, 00:21:12.655 "nvme_admin": false, 00:21:12.655 "nvme_io": false 00:21:12.655 }, 00:21:12.655 "memory_domains": [ 00:21:12.655 { 00:21:12.655 "dma_device_id": "system", 00:21:12.655 "dma_device_type": 1 00:21:12.655 }, 00:21:12.655 { 00:21:12.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.655 "dma_device_type": 2 00:21:12.655 } 00:21:12.655 ], 00:21:12.655 "driver_specific": {} 00:21:12.655 } 00:21:12.655 ] 00:21:12.655 00:35:06 -- common/autotest_common.sh@893 -- # return 0 00:21:12.655 00:35:06 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:12.655 00:35:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:12.655 00:35:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:12.655 00:35:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:12.655 00:35:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:12.655 00:35:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:12.655 00:35:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:12.655 00:35:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:12.655 00:35:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:12.655 00:35:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:12.655 00:35:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.655 00:35:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.913 00:35:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:12.913 "name": "Existed_Raid", 00:21:12.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.913 "strip_size_kb": 64, 00:21:12.913 "state": "configuring", 00:21:12.913 "raid_level": "concat", 00:21:12.913 "superblock": false, 00:21:12.913 "num_base_bdevs": 4, 00:21:12.913 "num_base_bdevs_discovered": 1, 00:21:12.913 "num_base_bdevs_operational": 4, 00:21:12.913 "base_bdevs_list": [ 00:21:12.913 { 00:21:12.913 "name": "BaseBdev1", 00:21:12.913 "uuid": "e4d83020-64c8-4060-be54-7cb68ea06ef0", 00:21:12.913 "is_configured": true, 00:21:12.913 "data_offset": 0, 00:21:12.913 "data_size": 65536 00:21:12.913 }, 00:21:12.913 { 00:21:12.913 "name": "BaseBdev2", 00:21:12.913 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:12.913 "is_configured": false, 00:21:12.913 "data_offset": 0, 00:21:12.913 "data_size": 0 00:21:12.913 }, 00:21:12.913 { 00:21:12.913 "name": "BaseBdev3", 00:21:12.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.913 "is_configured": false, 00:21:12.913 "data_offset": 0, 00:21:12.913 "data_size": 0 00:21:12.913 }, 00:21:12.913 { 00:21:12.913 "name": "BaseBdev4", 00:21:12.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.913 "is_configured": false, 00:21:12.913 "data_offset": 0, 00:21:12.913 "data_size": 0 00:21:12.913 } 00:21:12.913 ] 00:21:12.913 }' 00:21:12.913 00:35:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:12.913 00:35:06 -- common/autotest_common.sh@10 -- # set +x 00:21:13.477 00:35:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:13.735 [2024-04-24 00:35:07.488803] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:13.735 [2024-04-24 00:35:07.489098] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:21:13.735 00:35:07 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:21:13.735 00:35:07 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:13.994 [2024-04-24 00:35:07.736911] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:13.994 [2024-04-24 00:35:07.739432] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:13.994 [2024-04-24 00:35:07.739657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:13.994 [2024-04-24 00:35:07.739759] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:13.994 [2024-04-24 00:35:07.739826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:13.994 [2024-04-24 00:35:07.739960] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:13.994 [2024-04-24 00:35:07.740022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.994 00:35:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:21:14.252 00:35:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:14.252 "name": "Existed_Raid", 00:21:14.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.252 "strip_size_kb": 64, 00:21:14.252 "state": "configuring", 00:21:14.252 "raid_level": "concat", 00:21:14.252 "superblock": false, 00:21:14.252 "num_base_bdevs": 4, 00:21:14.252 "num_base_bdevs_discovered": 1, 00:21:14.252 "num_base_bdevs_operational": 4, 00:21:14.252 "base_bdevs_list": [ 00:21:14.252 { 00:21:14.252 "name": "BaseBdev1", 00:21:14.252 "uuid": "e4d83020-64c8-4060-be54-7cb68ea06ef0", 00:21:14.252 "is_configured": true, 00:21:14.252 "data_offset": 0, 00:21:14.252 "data_size": 65536 00:21:14.252 }, 00:21:14.252 { 00:21:14.252 "name": "BaseBdev2", 00:21:14.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.252 "is_configured": false, 00:21:14.252 "data_offset": 0, 00:21:14.252 "data_size": 0 00:21:14.252 }, 00:21:14.252 { 00:21:14.252 "name": "BaseBdev3", 00:21:14.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.252 "is_configured": false, 00:21:14.252 "data_offset": 0, 00:21:14.252 "data_size": 0 00:21:14.252 }, 00:21:14.252 { 00:21:14.252 "name": "BaseBdev4", 00:21:14.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.252 "is_configured": false, 00:21:14.252 "data_offset": 0, 00:21:14.252 "data_size": 0 00:21:14.252 } 00:21:14.252 ] 00:21:14.252 }' 00:21:14.252 00:35:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:14.252 00:35:08 -- common/autotest_common.sh@10 -- # set +x 00:21:15.234 00:35:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:15.492 [2024-04-24 00:35:09.050306] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:15.492 BaseBdev2 00:21:15.492 00:35:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:15.492 00:35:09 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:21:15.492 00:35:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:15.492 00:35:09 -- common/autotest_common.sh@887 -- # local i 00:21:15.492 00:35:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:15.492 00:35:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:15.492 00:35:09 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:15.492 00:35:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:15.749 [ 00:21:15.750 { 00:21:15.750 "name": "BaseBdev2", 00:21:15.750 "aliases": [ 00:21:15.750 "a9cc1da6-d9ec-4a35-b11d-0f5595de4f17" 00:21:15.750 ], 00:21:15.750 "product_name": "Malloc disk", 00:21:15.750 "block_size": 512, 00:21:15.750 "num_blocks": 65536, 00:21:15.750 "uuid": "a9cc1da6-d9ec-4a35-b11d-0f5595de4f17", 00:21:15.750 "assigned_rate_limits": { 00:21:15.750 "rw_ios_per_sec": 0, 00:21:15.750 "rw_mbytes_per_sec": 0, 00:21:15.750 "r_mbytes_per_sec": 0, 00:21:15.750 "w_mbytes_per_sec": 0 00:21:15.750 }, 00:21:15.750 "claimed": true, 00:21:15.750 "claim_type": "exclusive_write", 00:21:15.750 "zoned": false, 00:21:15.750 "supported_io_types": { 00:21:15.750 "read": true, 00:21:15.750 "write": true, 00:21:15.750 "unmap": true, 00:21:15.750 "write_zeroes": true, 00:21:15.750 "flush": true, 00:21:15.750 "reset": true, 00:21:15.750 "compare": false, 00:21:15.750 "compare_and_write": false, 00:21:15.750 "abort": true, 
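The concat state test traced here repeats the same pattern for each base device; as a condensed sketch (socket path, names and sizes copied from the trace; the rpc wrapper is an illustrative helper, not part of the test script):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # Creating the raid before any base bdev exists leaves Existed_Raid in "configuring"
  # with num_base_bdevs_discovered == 0.
  rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # Each malloc bdev created with a matching name is claimed immediately; the discovered
  # count climbs while the raid stays in "configuring" until every base bdev is present.
  rpc bdev_malloc_create 32 512 -b BaseBdev1
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

The JSON dumps in the trace show this progression directly: num_base_bdevs_discovered rises from 0 to 1 and then 2 as BaseBdev1 and BaseBdev2 are created and claimed.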
00:21:15.750 "nvme_admin": false, 00:21:15.750 "nvme_io": false 00:21:15.750 }, 00:21:15.750 "memory_domains": [ 00:21:15.750 { 00:21:15.750 "dma_device_id": "system", 00:21:15.750 "dma_device_type": 1 00:21:15.750 }, 00:21:15.750 { 00:21:15.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.750 "dma_device_type": 2 00:21:15.750 } 00:21:15.750 ], 00:21:15.750 "driver_specific": {} 00:21:15.750 } 00:21:15.750 ] 00:21:15.750 00:35:09 -- common/autotest_common.sh@893 -- # return 0 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.750 00:35:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.314 00:35:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:16.314 "name": "Existed_Raid", 00:21:16.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.314 "strip_size_kb": 64, 00:21:16.314 "state": "configuring", 00:21:16.314 "raid_level": "concat", 00:21:16.314 "superblock": false, 00:21:16.314 "num_base_bdevs": 4, 00:21:16.314 "num_base_bdevs_discovered": 2, 00:21:16.314 "num_base_bdevs_operational": 4, 00:21:16.314 "base_bdevs_list": [ 00:21:16.314 { 00:21:16.314 "name": "BaseBdev1", 00:21:16.314 "uuid": "e4d83020-64c8-4060-be54-7cb68ea06ef0", 00:21:16.314 "is_configured": true, 00:21:16.314 "data_offset": 0, 00:21:16.314 "data_size": 65536 00:21:16.314 }, 00:21:16.314 { 00:21:16.314 "name": "BaseBdev2", 00:21:16.314 "uuid": "a9cc1da6-d9ec-4a35-b11d-0f5595de4f17", 00:21:16.314 "is_configured": true, 00:21:16.314 "data_offset": 0, 00:21:16.314 "data_size": 65536 00:21:16.314 }, 00:21:16.314 { 00:21:16.314 "name": "BaseBdev3", 00:21:16.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.314 "is_configured": false, 00:21:16.314 "data_offset": 0, 00:21:16.314 "data_size": 0 00:21:16.314 }, 00:21:16.314 { 00:21:16.314 "name": "BaseBdev4", 00:21:16.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.314 "is_configured": false, 00:21:16.314 "data_offset": 0, 00:21:16.314 "data_size": 0 00:21:16.314 } 00:21:16.314 ] 00:21:16.314 }' 00:21:16.314 00:35:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:16.314 00:35:09 -- common/autotest_common.sh@10 -- # set +x 00:21:16.949 00:35:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:16.949 [2024-04-24 00:35:10.735106] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:16.949 BaseBdev3 00:21:17.207 00:35:10 -- 
bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:17.207 00:35:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:21:17.207 00:35:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:17.207 00:35:10 -- common/autotest_common.sh@887 -- # local i 00:21:17.207 00:35:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:17.207 00:35:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:17.207 00:35:10 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:17.207 00:35:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:17.467 [ 00:21:17.467 { 00:21:17.467 "name": "BaseBdev3", 00:21:17.467 "aliases": [ 00:21:17.467 "ff2429be-9cb1-40f1-870f-64cd91b05c3e" 00:21:17.467 ], 00:21:17.467 "product_name": "Malloc disk", 00:21:17.467 "block_size": 512, 00:21:17.467 "num_blocks": 65536, 00:21:17.467 "uuid": "ff2429be-9cb1-40f1-870f-64cd91b05c3e", 00:21:17.467 "assigned_rate_limits": { 00:21:17.467 "rw_ios_per_sec": 0, 00:21:17.467 "rw_mbytes_per_sec": 0, 00:21:17.467 "r_mbytes_per_sec": 0, 00:21:17.467 "w_mbytes_per_sec": 0 00:21:17.467 }, 00:21:17.467 "claimed": true, 00:21:17.467 "claim_type": "exclusive_write", 00:21:17.467 "zoned": false, 00:21:17.467 "supported_io_types": { 00:21:17.467 "read": true, 00:21:17.467 "write": true, 00:21:17.467 "unmap": true, 00:21:17.467 "write_zeroes": true, 00:21:17.467 "flush": true, 00:21:17.467 "reset": true, 00:21:17.467 "compare": false, 00:21:17.467 "compare_and_write": false, 00:21:17.467 "abort": true, 00:21:17.467 "nvme_admin": false, 00:21:17.467 "nvme_io": false 00:21:17.467 }, 00:21:17.467 "memory_domains": [ 00:21:17.467 { 00:21:17.467 "dma_device_id": "system", 00:21:17.467 "dma_device_type": 1 00:21:17.467 }, 00:21:17.467 { 00:21:17.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.467 "dma_device_type": 2 00:21:17.467 } 00:21:17.467 ], 00:21:17.467 "driver_specific": {} 00:21:17.467 } 00:21:17.467 ] 00:21:17.467 00:35:11 -- common/autotest_common.sh@893 -- # return 0 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.467 00:35:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.724 00:35:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:17.724 "name": "Existed_Raid", 00:21:17.724 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:17.724 "strip_size_kb": 64, 00:21:17.724 "state": "configuring", 00:21:17.724 "raid_level": "concat", 00:21:17.724 "superblock": false, 00:21:17.724 "num_base_bdevs": 4, 00:21:17.724 "num_base_bdevs_discovered": 3, 00:21:17.724 "num_base_bdevs_operational": 4, 00:21:17.724 "base_bdevs_list": [ 00:21:17.724 { 00:21:17.724 "name": "BaseBdev1", 00:21:17.724 "uuid": "e4d83020-64c8-4060-be54-7cb68ea06ef0", 00:21:17.724 "is_configured": true, 00:21:17.724 "data_offset": 0, 00:21:17.724 "data_size": 65536 00:21:17.724 }, 00:21:17.724 { 00:21:17.724 "name": "BaseBdev2", 00:21:17.724 "uuid": "a9cc1da6-d9ec-4a35-b11d-0f5595de4f17", 00:21:17.724 "is_configured": true, 00:21:17.724 "data_offset": 0, 00:21:17.724 "data_size": 65536 00:21:17.724 }, 00:21:17.724 { 00:21:17.724 "name": "BaseBdev3", 00:21:17.724 "uuid": "ff2429be-9cb1-40f1-870f-64cd91b05c3e", 00:21:17.724 "is_configured": true, 00:21:17.724 "data_offset": 0, 00:21:17.724 "data_size": 65536 00:21:17.725 }, 00:21:17.725 { 00:21:17.725 "name": "BaseBdev4", 00:21:17.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.725 "is_configured": false, 00:21:17.725 "data_offset": 0, 00:21:17.725 "data_size": 0 00:21:17.725 } 00:21:17.725 ] 00:21:17.725 }' 00:21:17.725 00:35:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:17.725 00:35:11 -- common/autotest_common.sh@10 -- # set +x 00:21:18.658 00:35:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:18.658 [2024-04-24 00:35:12.387757] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:18.658 [2024-04-24 00:35:12.388037] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:21:18.658 [2024-04-24 00:35:12.388084] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:18.658 [2024-04-24 00:35:12.388292] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:18.658 [2024-04-24 00:35:12.388764] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:21:18.658 [2024-04-24 00:35:12.388903] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:21:18.658 [2024-04-24 00:35:12.389313] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.658 BaseBdev4 00:21:18.658 00:35:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:21:18.658 00:35:12 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:21:18.658 00:35:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:18.658 00:35:12 -- common/autotest_common.sh@887 -- # local i 00:21:18.658 00:35:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:18.658 00:35:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:18.658 00:35:12 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:18.974 00:35:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:19.233 [ 00:21:19.233 { 00:21:19.233 "name": "BaseBdev4", 00:21:19.233 "aliases": [ 00:21:19.233 "6e5ebf7b-55d8-4c69-8aa5-8a901d377260" 00:21:19.233 ], 00:21:19.233 "product_name": "Malloc disk", 00:21:19.233 "block_size": 512, 00:21:19.233 "num_blocks": 65536, 00:21:19.233 "uuid": 
"6e5ebf7b-55d8-4c69-8aa5-8a901d377260", 00:21:19.233 "assigned_rate_limits": { 00:21:19.233 "rw_ios_per_sec": 0, 00:21:19.233 "rw_mbytes_per_sec": 0, 00:21:19.233 "r_mbytes_per_sec": 0, 00:21:19.233 "w_mbytes_per_sec": 0 00:21:19.233 }, 00:21:19.233 "claimed": true, 00:21:19.233 "claim_type": "exclusive_write", 00:21:19.233 "zoned": false, 00:21:19.233 "supported_io_types": { 00:21:19.233 "read": true, 00:21:19.233 "write": true, 00:21:19.233 "unmap": true, 00:21:19.233 "write_zeroes": true, 00:21:19.233 "flush": true, 00:21:19.233 "reset": true, 00:21:19.233 "compare": false, 00:21:19.233 "compare_and_write": false, 00:21:19.233 "abort": true, 00:21:19.233 "nvme_admin": false, 00:21:19.233 "nvme_io": false 00:21:19.233 }, 00:21:19.233 "memory_domains": [ 00:21:19.233 { 00:21:19.233 "dma_device_id": "system", 00:21:19.233 "dma_device_type": 1 00:21:19.233 }, 00:21:19.233 { 00:21:19.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.233 "dma_device_type": 2 00:21:19.233 } 00:21:19.233 ], 00:21:19.233 "driver_specific": {} 00:21:19.233 } 00:21:19.233 ] 00:21:19.233 00:35:12 -- common/autotest_common.sh@893 -- # return 0 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.233 00:35:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.490 00:35:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:19.490 "name": "Existed_Raid", 00:21:19.490 "uuid": "20cd3f23-4199-4ee8-9bff-a899dab02003", 00:21:19.490 "strip_size_kb": 64, 00:21:19.490 "state": "online", 00:21:19.490 "raid_level": "concat", 00:21:19.490 "superblock": false, 00:21:19.490 "num_base_bdevs": 4, 00:21:19.490 "num_base_bdevs_discovered": 4, 00:21:19.490 "num_base_bdevs_operational": 4, 00:21:19.490 "base_bdevs_list": [ 00:21:19.490 { 00:21:19.490 "name": "BaseBdev1", 00:21:19.490 "uuid": "e4d83020-64c8-4060-be54-7cb68ea06ef0", 00:21:19.490 "is_configured": true, 00:21:19.490 "data_offset": 0, 00:21:19.490 "data_size": 65536 00:21:19.490 }, 00:21:19.491 { 00:21:19.491 "name": "BaseBdev2", 00:21:19.491 "uuid": "a9cc1da6-d9ec-4a35-b11d-0f5595de4f17", 00:21:19.491 "is_configured": true, 00:21:19.491 "data_offset": 0, 00:21:19.491 "data_size": 65536 00:21:19.491 }, 00:21:19.491 { 00:21:19.491 "name": "BaseBdev3", 00:21:19.491 "uuid": "ff2429be-9cb1-40f1-870f-64cd91b05c3e", 00:21:19.491 "is_configured": true, 00:21:19.491 "data_offset": 0, 00:21:19.491 "data_size": 65536 00:21:19.491 }, 00:21:19.491 { 00:21:19.491 "name": "BaseBdev4", 00:21:19.491 "uuid": 
"6e5ebf7b-55d8-4c69-8aa5-8a901d377260", 00:21:19.491 "is_configured": true, 00:21:19.491 "data_offset": 0, 00:21:19.491 "data_size": 65536 00:21:19.491 } 00:21:19.491 ] 00:21:19.491 }' 00:21:19.491 00:35:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:19.491 00:35:13 -- common/autotest_common.sh@10 -- # set +x 00:21:20.055 00:35:13 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:20.313 [2024-04-24 00:35:13.864235] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:20.313 [2024-04-24 00:35:13.864485] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:20.313 [2024-04-24 00:35:13.864648] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:20.313 00:35:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:20.313 00:35:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:21:20.313 00:35:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:20.313 00:35:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:20.313 00:35:13 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:21:20.314 00:35:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:20.314 00:35:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:20.314 00:35:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:21:20.314 00:35:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:20.314 00:35:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:20.314 00:35:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:20.314 00:35:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:20.314 00:35:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:20.314 00:35:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:20.314 00:35:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:20.314 00:35:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.314 00:35:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.572 00:35:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:20.572 "name": "Existed_Raid", 00:21:20.572 "uuid": "20cd3f23-4199-4ee8-9bff-a899dab02003", 00:21:20.572 "strip_size_kb": 64, 00:21:20.572 "state": "offline", 00:21:20.572 "raid_level": "concat", 00:21:20.572 "superblock": false, 00:21:20.572 "num_base_bdevs": 4, 00:21:20.572 "num_base_bdevs_discovered": 3, 00:21:20.572 "num_base_bdevs_operational": 3, 00:21:20.572 "base_bdevs_list": [ 00:21:20.572 { 00:21:20.572 "name": null, 00:21:20.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.572 "is_configured": false, 00:21:20.572 "data_offset": 0, 00:21:20.572 "data_size": 65536 00:21:20.572 }, 00:21:20.572 { 00:21:20.572 "name": "BaseBdev2", 00:21:20.572 "uuid": "a9cc1da6-d9ec-4a35-b11d-0f5595de4f17", 00:21:20.572 "is_configured": true, 00:21:20.572 "data_offset": 0, 00:21:20.572 "data_size": 65536 00:21:20.572 }, 00:21:20.572 { 00:21:20.572 "name": "BaseBdev3", 00:21:20.572 "uuid": "ff2429be-9cb1-40f1-870f-64cd91b05c3e", 00:21:20.572 "is_configured": true, 00:21:20.572 "data_offset": 0, 00:21:20.572 "data_size": 65536 00:21:20.572 }, 00:21:20.572 { 00:21:20.572 "name": "BaseBdev4", 00:21:20.572 "uuid": "6e5ebf7b-55d8-4c69-8aa5-8a901d377260", 00:21:20.572 "is_configured": true, 00:21:20.572 "data_offset": 0, 00:21:20.572 "data_size": 
65536 00:21:20.572 } 00:21:20.572 ] 00:21:20.572 }' 00:21:20.572 00:35:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:20.572 00:35:14 -- common/autotest_common.sh@10 -- # set +x 00:21:21.169 00:35:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:21.170 00:35:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:21.170 00:35:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:21.170 00:35:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.427 00:35:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:21.427 00:35:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:21.427 00:35:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:21.684 [2024-04-24 00:35:15.250870] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:21.684 00:35:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:21.684 00:35:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:21.684 00:35:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.684 00:35:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:21.941 00:35:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:21.941 00:35:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:21.941 00:35:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:22.198 [2024-04-24 00:35:15.922641] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:22.456 00:35:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:22.456 00:35:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:22.456 00:35:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:22.456 00:35:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.714 00:35:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:22.714 00:35:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:22.714 00:35:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:22.714 [2024-04-24 00:35:16.497925] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:22.714 [2024-04-24 00:35:16.498232] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:21:22.972 00:35:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:22.972 00:35:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:22.972 00:35:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.972 00:35:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:23.230 00:35:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:23.230 00:35:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:23.230 00:35:16 -- bdev/bdev_raid.sh@287 -- # killprocess 128287 00:21:23.230 00:35:16 -- common/autotest_common.sh@936 -- # '[' -z 128287 ']' 00:21:23.230 00:35:16 -- common/autotest_common.sh@940 -- # kill -0 128287 00:21:23.230 00:35:16 -- common/autotest_common.sh@941 -- # uname 00:21:23.230 00:35:16 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:21:23.230 00:35:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128287 00:21:23.230 00:35:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:23.230 00:35:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:23.230 00:35:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128287' 00:21:23.230 killing process with pid 128287 00:21:23.230 00:35:16 -- common/autotest_common.sh@955 -- # kill 128287 00:21:23.230 [2024-04-24 00:35:16.892774] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:23.230 00:35:16 -- common/autotest_common.sh@960 -- # wait 128287 00:21:23.230 [2024-04-24 00:35:16.893101] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:24.601 00:35:18 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:24.601 00:21:24.601 real 0m15.705s 00:21:24.601 user 0m27.048s 00:21:24.601 sys 0m2.261s 00:21:24.601 00:35:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:24.601 00:35:18 -- common/autotest_common.sh@10 -- # set +x 00:21:24.601 ************************************ 00:21:24.601 END TEST raid_state_function_test 00:21:24.601 ************************************ 00:21:24.601 00:35:18 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:21:24.601 00:35:18 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:24.601 00:35:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:24.601 00:35:18 -- common/autotest_common.sh@10 -- # set +x 00:21:24.859 ************************************ 00:21:24.859 START TEST raid_state_function_test_sb 00:21:24.859 ************************************ 00:21:24.859 00:35:18 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 4 true 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@210 -- # local 
superblock_create_arg 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@226 -- # raid_pid=128749 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128749' 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:24.859 Process raid pid: 128749 00:21:24.859 00:35:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128749 /var/tmp/spdk-raid.sock 00:21:24.859 00:35:18 -- common/autotest_common.sh@817 -- # '[' -z 128749 ']' 00:21:24.859 00:35:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:24.859 00:35:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:24.859 00:35:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:24.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:24.859 00:35:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:24.859 00:35:18 -- common/autotest_common.sh@10 -- # set +x 00:21:24.859 [2024-04-24 00:35:18.541124] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:21:24.859 [2024-04-24 00:35:18.541596] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.117 [2024-04-24 00:35:18.726668] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.375 [2024-04-24 00:35:18.963405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.661 [2024-04-24 00:35:19.201081] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:25.920 00:35:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:25.920 00:35:19 -- common/autotest_common.sh@850 -- # return 0 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:25.920 [2024-04-24 00:35:19.655517] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:25.920 [2024-04-24 00:35:19.655834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:25.920 [2024-04-24 00:35:19.655939] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:25.920 [2024-04-24 00:35:19.656001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:25.920 [2024-04-24 00:35:19.656241] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:25.920 [2024-04-24 00:35:19.656324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:25.920 [2024-04-24 00:35:19.656503] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:25.920 [2024-04-24 00:35:19.656563] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.920 00:35:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.178 00:35:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:26.178 "name": "Existed_Raid", 00:21:26.178 "uuid": "0ecca83b-8c5f-48db-bed7-5531f520045a", 00:21:26.178 "strip_size_kb": 64, 00:21:26.178 "state": "configuring", 00:21:26.178 "raid_level": "concat", 00:21:26.178 "superblock": true, 00:21:26.178 "num_base_bdevs": 4, 00:21:26.178 "num_base_bdevs_discovered": 0, 00:21:26.178 "num_base_bdevs_operational": 4, 00:21:26.178 "base_bdevs_list": [ 00:21:26.178 { 00:21:26.178 "name": "BaseBdev1", 00:21:26.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.178 "is_configured": false, 00:21:26.178 "data_offset": 0, 00:21:26.178 "data_size": 0 00:21:26.178 }, 00:21:26.178 { 00:21:26.178 "name": "BaseBdev2", 00:21:26.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.178 "is_configured": false, 00:21:26.178 "data_offset": 0, 00:21:26.178 "data_size": 0 00:21:26.178 }, 00:21:26.178 { 00:21:26.178 "name": "BaseBdev3", 00:21:26.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.178 "is_configured": false, 00:21:26.178 "data_offset": 0, 00:21:26.178 "data_size": 0 00:21:26.178 }, 00:21:26.178 { 00:21:26.178 "name": "BaseBdev4", 00:21:26.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.178 "is_configured": false, 00:21:26.178 "data_offset": 0, 00:21:26.178 "data_size": 0 00:21:26.178 } 00:21:26.178 ] 00:21:26.178 }' 00:21:26.178 00:35:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:26.178 00:35:19 -- common/autotest_common.sh@10 -- # set +x 00:21:27.111 00:35:20 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:27.111 [2024-04-24 00:35:20.863613] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:27.111 [2024-04-24 00:35:20.863900] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:21:27.111 00:35:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:27.368 [2024-04-24 00:35:21.147737] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:27.368 [2024-04-24 00:35:21.148049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist 
now 00:21:27.368 [2024-04-24 00:35:21.148170] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:27.368 [2024-04-24 00:35:21.148244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:27.368 [2024-04-24 00:35:21.148441] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:27.368 [2024-04-24 00:35:21.148539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:27.368 [2024-04-24 00:35:21.148754] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:27.368 [2024-04-24 00:35:21.148825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:27.626 00:35:21 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:27.946 [2024-04-24 00:35:21.468961] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:27.946 BaseBdev1 00:21:27.946 00:35:21 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:27.946 00:35:21 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:21:27.946 00:35:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:27.946 00:35:21 -- common/autotest_common.sh@887 -- # local i 00:21:27.946 00:35:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:27.946 00:35:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:27.946 00:35:21 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:28.206 00:35:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:28.466 [ 00:21:28.466 { 00:21:28.466 "name": "BaseBdev1", 00:21:28.466 "aliases": [ 00:21:28.466 "ddf75594-1fab-4d7f-a819-5257b3490b2b" 00:21:28.466 ], 00:21:28.466 "product_name": "Malloc disk", 00:21:28.466 "block_size": 512, 00:21:28.466 "num_blocks": 65536, 00:21:28.466 "uuid": "ddf75594-1fab-4d7f-a819-5257b3490b2b", 00:21:28.466 "assigned_rate_limits": { 00:21:28.466 "rw_ios_per_sec": 0, 00:21:28.466 "rw_mbytes_per_sec": 0, 00:21:28.466 "r_mbytes_per_sec": 0, 00:21:28.466 "w_mbytes_per_sec": 0 00:21:28.466 }, 00:21:28.466 "claimed": true, 00:21:28.466 "claim_type": "exclusive_write", 00:21:28.466 "zoned": false, 00:21:28.466 "supported_io_types": { 00:21:28.466 "read": true, 00:21:28.466 "write": true, 00:21:28.466 "unmap": true, 00:21:28.466 "write_zeroes": true, 00:21:28.466 "flush": true, 00:21:28.466 "reset": true, 00:21:28.466 "compare": false, 00:21:28.466 "compare_and_write": false, 00:21:28.466 "abort": true, 00:21:28.466 "nvme_admin": false, 00:21:28.466 "nvme_io": false 00:21:28.466 }, 00:21:28.466 "memory_domains": [ 00:21:28.466 { 00:21:28.466 "dma_device_id": "system", 00:21:28.466 "dma_device_type": 1 00:21:28.466 }, 00:21:28.466 { 00:21:28.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.466 "dma_device_type": 2 00:21:28.466 } 00:21:28.466 ], 00:21:28.466 "driver_specific": {} 00:21:28.466 } 00:21:28.466 ] 00:21:28.466 00:35:22 -- common/autotest_common.sh@893 -- # return 0 00:21:28.466 00:35:22 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:28.466 00:35:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:28.466 00:35:22 -- bdev/bdev_raid.sh@118 -- 
# local expected_state=configuring 00:21:28.466 00:35:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:28.466 00:35:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:28.466 00:35:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:28.466 00:35:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:28.466 00:35:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:28.466 00:35:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:28.466 00:35:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:28.466 00:35:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.466 00:35:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.727 00:35:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:28.727 "name": "Existed_Raid", 00:21:28.727 "uuid": "2d5a84e1-b9c4-4bb4-a217-0cd95604ac13", 00:21:28.727 "strip_size_kb": 64, 00:21:28.727 "state": "configuring", 00:21:28.727 "raid_level": "concat", 00:21:28.727 "superblock": true, 00:21:28.727 "num_base_bdevs": 4, 00:21:28.727 "num_base_bdevs_discovered": 1, 00:21:28.727 "num_base_bdevs_operational": 4, 00:21:28.727 "base_bdevs_list": [ 00:21:28.727 { 00:21:28.727 "name": "BaseBdev1", 00:21:28.727 "uuid": "ddf75594-1fab-4d7f-a819-5257b3490b2b", 00:21:28.727 "is_configured": true, 00:21:28.727 "data_offset": 2048, 00:21:28.727 "data_size": 63488 00:21:28.727 }, 00:21:28.727 { 00:21:28.727 "name": "BaseBdev2", 00:21:28.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.727 "is_configured": false, 00:21:28.727 "data_offset": 0, 00:21:28.727 "data_size": 0 00:21:28.727 }, 00:21:28.727 { 00:21:28.727 "name": "BaseBdev3", 00:21:28.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.727 "is_configured": false, 00:21:28.727 "data_offset": 0, 00:21:28.727 "data_size": 0 00:21:28.727 }, 00:21:28.727 { 00:21:28.727 "name": "BaseBdev4", 00:21:28.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.727 "is_configured": false, 00:21:28.727 "data_offset": 0, 00:21:28.727 "data_size": 0 00:21:28.727 } 00:21:28.727 ] 00:21:28.727 }' 00:21:28.727 00:35:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:28.727 00:35:22 -- common/autotest_common.sh@10 -- # set +x 00:21:29.305 00:35:23 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:29.564 [2024-04-24 00:35:23.305457] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:29.564 [2024-04-24 00:35:23.305759] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:21:29.564 00:35:23 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:21:29.564 00:35:23 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:30.129 00:35:23 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:30.386 BaseBdev1 00:21:30.386 00:35:23 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:21:30.386 00:35:23 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:21:30.386 00:35:23 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:30.386 00:35:23 -- common/autotest_common.sh@887 -- # local i 00:21:30.386 00:35:23 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 
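The waitforbdev helper traced around this point is the test's readiness gate after each bdev_malloc_create: it waits for examine callbacks to finish, then queries the named bdev with a 2000 ms timeout. A rough manual equivalent, assuming the same RPC socket and bdev names used by the test:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Create a 32 MiB malloc bdev with 512-byte blocks, as the test does for each BaseBdev
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1

    # Wait for bdev examine to complete, then wait up to 2000 ms for the bdev to appear
    $rpc -s $sock bdev_wait_for_examine
    $rpc -s $sock bdev_get_bdevs -b BaseBdev1 -t 2000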
00:21:30.386 00:35:23 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:30.386 00:35:23 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:30.644 00:35:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:30.902 [ 00:21:30.902 { 00:21:30.902 "name": "BaseBdev1", 00:21:30.902 "aliases": [ 00:21:30.902 "c5a92749-28e5-4e88-9520-6f6a7c32ca1b" 00:21:30.902 ], 00:21:30.902 "product_name": "Malloc disk", 00:21:30.902 "block_size": 512, 00:21:30.902 "num_blocks": 65536, 00:21:30.902 "uuid": "c5a92749-28e5-4e88-9520-6f6a7c32ca1b", 00:21:30.902 "assigned_rate_limits": { 00:21:30.902 "rw_ios_per_sec": 0, 00:21:30.902 "rw_mbytes_per_sec": 0, 00:21:30.902 "r_mbytes_per_sec": 0, 00:21:30.902 "w_mbytes_per_sec": 0 00:21:30.902 }, 00:21:30.902 "claimed": false, 00:21:30.902 "zoned": false, 00:21:30.902 "supported_io_types": { 00:21:30.902 "read": true, 00:21:30.902 "write": true, 00:21:30.902 "unmap": true, 00:21:30.902 "write_zeroes": true, 00:21:30.902 "flush": true, 00:21:30.902 "reset": true, 00:21:30.902 "compare": false, 00:21:30.902 "compare_and_write": false, 00:21:30.902 "abort": true, 00:21:30.902 "nvme_admin": false, 00:21:30.902 "nvme_io": false 00:21:30.902 }, 00:21:30.902 "memory_domains": [ 00:21:30.902 { 00:21:30.902 "dma_device_id": "system", 00:21:30.902 "dma_device_type": 1 00:21:30.902 }, 00:21:30.902 { 00:21:30.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.902 "dma_device_type": 2 00:21:30.902 } 00:21:30.902 ], 00:21:30.902 "driver_specific": {} 00:21:30.902 } 00:21:30.902 ] 00:21:30.902 00:35:24 -- common/autotest_common.sh@893 -- # return 0 00:21:30.902 00:35:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:30.902 [2024-04-24 00:35:24.676971] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:30.902 [2024-04-24 00:35:24.679396] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:30.902 [2024-04-24 00:35:24.679594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:30.902 [2024-04-24 00:35:24.679688] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:30.902 [2024-04-24 00:35:24.679747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:30.902 [2024-04-24 00:35:24.679818] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:30.902 [2024-04-24 00:35:24.679864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:31.160 "name": "Existed_Raid", 00:21:31.160 "uuid": "b8261508-faff-4185-b735-74d31c9ebd21", 00:21:31.160 "strip_size_kb": 64, 00:21:31.160 "state": "configuring", 00:21:31.160 "raid_level": "concat", 00:21:31.160 "superblock": true, 00:21:31.160 "num_base_bdevs": 4, 00:21:31.160 "num_base_bdevs_discovered": 1, 00:21:31.160 "num_base_bdevs_operational": 4, 00:21:31.160 "base_bdevs_list": [ 00:21:31.160 { 00:21:31.160 "name": "BaseBdev1", 00:21:31.160 "uuid": "c5a92749-28e5-4e88-9520-6f6a7c32ca1b", 00:21:31.160 "is_configured": true, 00:21:31.160 "data_offset": 2048, 00:21:31.160 "data_size": 63488 00:21:31.160 }, 00:21:31.160 { 00:21:31.160 "name": "BaseBdev2", 00:21:31.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.160 "is_configured": false, 00:21:31.160 "data_offset": 0, 00:21:31.160 "data_size": 0 00:21:31.160 }, 00:21:31.160 { 00:21:31.160 "name": "BaseBdev3", 00:21:31.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.160 "is_configured": false, 00:21:31.160 "data_offset": 0, 00:21:31.160 "data_size": 0 00:21:31.160 }, 00:21:31.160 { 00:21:31.160 "name": "BaseBdev4", 00:21:31.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.160 "is_configured": false, 00:21:31.160 "data_offset": 0, 00:21:31.160 "data_size": 0 00:21:31.160 } 00:21:31.160 ] 00:21:31.160 }' 00:21:31.160 00:35:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:31.160 00:35:24 -- common/autotest_common.sh@10 -- # set +x 00:21:31.725 00:35:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:31.982 [2024-04-24 00:35:25.734756] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:31.982 BaseBdev2 00:21:31.982 00:35:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:31.982 00:35:25 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:21:31.982 00:35:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:31.982 00:35:25 -- common/autotest_common.sh@887 -- # local i 00:21:31.982 00:35:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:31.982 00:35:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:31.983 00:35:25 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:32.240 00:35:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:32.497 [ 00:21:32.497 { 00:21:32.497 "name": "BaseBdev2", 00:21:32.497 "aliases": [ 00:21:32.497 "cb2a9c36-1526-4a74-be2c-15ccb599249e" 00:21:32.497 ], 00:21:32.497 "product_name": "Malloc disk", 00:21:32.497 "block_size": 512, 00:21:32.497 "num_blocks": 65536, 00:21:32.497 "uuid": "cb2a9c36-1526-4a74-be2c-15ccb599249e", 00:21:32.497 "assigned_rate_limits": { 
00:21:32.497 "rw_ios_per_sec": 0, 00:21:32.497 "rw_mbytes_per_sec": 0, 00:21:32.497 "r_mbytes_per_sec": 0, 00:21:32.497 "w_mbytes_per_sec": 0 00:21:32.497 }, 00:21:32.497 "claimed": true, 00:21:32.497 "claim_type": "exclusive_write", 00:21:32.497 "zoned": false, 00:21:32.497 "supported_io_types": { 00:21:32.497 "read": true, 00:21:32.497 "write": true, 00:21:32.497 "unmap": true, 00:21:32.497 "write_zeroes": true, 00:21:32.497 "flush": true, 00:21:32.497 "reset": true, 00:21:32.497 "compare": false, 00:21:32.497 "compare_and_write": false, 00:21:32.497 "abort": true, 00:21:32.497 "nvme_admin": false, 00:21:32.497 "nvme_io": false 00:21:32.497 }, 00:21:32.497 "memory_domains": [ 00:21:32.497 { 00:21:32.497 "dma_device_id": "system", 00:21:32.497 "dma_device_type": 1 00:21:32.497 }, 00:21:32.497 { 00:21:32.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.497 "dma_device_type": 2 00:21:32.497 } 00:21:32.497 ], 00:21:32.497 "driver_specific": {} 00:21:32.497 } 00:21:32.497 ] 00:21:32.497 00:35:26 -- common/autotest_common.sh@893 -- # return 0 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.497 00:35:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.755 00:35:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:32.755 "name": "Existed_Raid", 00:21:32.755 "uuid": "b8261508-faff-4185-b735-74d31c9ebd21", 00:21:32.755 "strip_size_kb": 64, 00:21:32.755 "state": "configuring", 00:21:32.755 "raid_level": "concat", 00:21:32.755 "superblock": true, 00:21:32.755 "num_base_bdevs": 4, 00:21:32.755 "num_base_bdevs_discovered": 2, 00:21:32.755 "num_base_bdevs_operational": 4, 00:21:32.755 "base_bdevs_list": [ 00:21:32.755 { 00:21:32.755 "name": "BaseBdev1", 00:21:32.755 "uuid": "c5a92749-28e5-4e88-9520-6f6a7c32ca1b", 00:21:32.755 "is_configured": true, 00:21:32.755 "data_offset": 2048, 00:21:32.755 "data_size": 63488 00:21:32.755 }, 00:21:32.755 { 00:21:32.755 "name": "BaseBdev2", 00:21:32.755 "uuid": "cb2a9c36-1526-4a74-be2c-15ccb599249e", 00:21:32.755 "is_configured": true, 00:21:32.755 "data_offset": 2048, 00:21:32.755 "data_size": 63488 00:21:32.755 }, 00:21:32.755 { 00:21:32.755 "name": "BaseBdev3", 00:21:32.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.755 "is_configured": false, 00:21:32.755 "data_offset": 0, 00:21:32.755 "data_size": 0 00:21:32.755 }, 00:21:32.755 { 00:21:32.755 "name": "BaseBdev4", 00:21:32.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.755 "is_configured": false, 
00:21:32.755 "data_offset": 0, 00:21:32.755 "data_size": 0 00:21:32.755 } 00:21:32.755 ] 00:21:32.755 }' 00:21:32.755 00:35:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:32.755 00:35:26 -- common/autotest_common.sh@10 -- # set +x 00:21:33.318 00:35:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:33.575 [2024-04-24 00:35:27.284518] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:33.575 BaseBdev3 00:21:33.575 00:35:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:33.575 00:35:27 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:21:33.575 00:35:27 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:33.575 00:35:27 -- common/autotest_common.sh@887 -- # local i 00:21:33.575 00:35:27 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:33.575 00:35:27 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:33.575 00:35:27 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:33.832 00:35:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:34.111 [ 00:21:34.111 { 00:21:34.111 "name": "BaseBdev3", 00:21:34.111 "aliases": [ 00:21:34.111 "5ff6f9ac-c7d5-4894-9d89-8ab3bf764f2e" 00:21:34.111 ], 00:21:34.111 "product_name": "Malloc disk", 00:21:34.111 "block_size": 512, 00:21:34.111 "num_blocks": 65536, 00:21:34.111 "uuid": "5ff6f9ac-c7d5-4894-9d89-8ab3bf764f2e", 00:21:34.111 "assigned_rate_limits": { 00:21:34.111 "rw_ios_per_sec": 0, 00:21:34.111 "rw_mbytes_per_sec": 0, 00:21:34.111 "r_mbytes_per_sec": 0, 00:21:34.111 "w_mbytes_per_sec": 0 00:21:34.111 }, 00:21:34.111 "claimed": true, 00:21:34.111 "claim_type": "exclusive_write", 00:21:34.111 "zoned": false, 00:21:34.111 "supported_io_types": { 00:21:34.111 "read": true, 00:21:34.111 "write": true, 00:21:34.111 "unmap": true, 00:21:34.111 "write_zeroes": true, 00:21:34.111 "flush": true, 00:21:34.111 "reset": true, 00:21:34.111 "compare": false, 00:21:34.111 "compare_and_write": false, 00:21:34.111 "abort": true, 00:21:34.111 "nvme_admin": false, 00:21:34.111 "nvme_io": false 00:21:34.111 }, 00:21:34.111 "memory_domains": [ 00:21:34.111 { 00:21:34.111 "dma_device_id": "system", 00:21:34.111 "dma_device_type": 1 00:21:34.111 }, 00:21:34.111 { 00:21:34.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.111 "dma_device_type": 2 00:21:34.111 } 00:21:34.111 ], 00:21:34.111 "driver_specific": {} 00:21:34.111 } 00:21:34.111 ] 00:21:34.111 00:35:27 -- common/autotest_common.sh@893 -- # return 0 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.111 00:35:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.369 00:35:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.369 "name": "Existed_Raid", 00:21:34.369 "uuid": "b8261508-faff-4185-b735-74d31c9ebd21", 00:21:34.369 "strip_size_kb": 64, 00:21:34.369 "state": "configuring", 00:21:34.369 "raid_level": "concat", 00:21:34.369 "superblock": true, 00:21:34.369 "num_base_bdevs": 4, 00:21:34.369 "num_base_bdevs_discovered": 3, 00:21:34.369 "num_base_bdevs_operational": 4, 00:21:34.369 "base_bdevs_list": [ 00:21:34.369 { 00:21:34.369 "name": "BaseBdev1", 00:21:34.369 "uuid": "c5a92749-28e5-4e88-9520-6f6a7c32ca1b", 00:21:34.369 "is_configured": true, 00:21:34.369 "data_offset": 2048, 00:21:34.369 "data_size": 63488 00:21:34.369 }, 00:21:34.369 { 00:21:34.369 "name": "BaseBdev2", 00:21:34.369 "uuid": "cb2a9c36-1526-4a74-be2c-15ccb599249e", 00:21:34.369 "is_configured": true, 00:21:34.369 "data_offset": 2048, 00:21:34.369 "data_size": 63488 00:21:34.369 }, 00:21:34.369 { 00:21:34.369 "name": "BaseBdev3", 00:21:34.369 "uuid": "5ff6f9ac-c7d5-4894-9d89-8ab3bf764f2e", 00:21:34.369 "is_configured": true, 00:21:34.369 "data_offset": 2048, 00:21:34.369 "data_size": 63488 00:21:34.369 }, 00:21:34.369 { 00:21:34.369 "name": "BaseBdev4", 00:21:34.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.369 "is_configured": false, 00:21:34.369 "data_offset": 0, 00:21:34.369 "data_size": 0 00:21:34.369 } 00:21:34.369 ] 00:21:34.369 }' 00:21:34.369 00:35:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:34.369 00:35:28 -- common/autotest_common.sh@10 -- # set +x 00:21:35.304 00:35:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:35.304 [2024-04-24 00:35:29.045410] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:35.304 [2024-04-24 00:35:29.045916] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:21:35.304 [2024-04-24 00:35:29.046042] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:35.304 [2024-04-24 00:35:29.046225] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:35.304 BaseBdev4 00:21:35.304 [2024-04-24 00:35:29.046669] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:21:35.304 [2024-04-24 00:35:29.046689] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:21:35.304 [2024-04-24 00:35:29.046858] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.304 00:35:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:21:35.304 00:35:29 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:21:35.304 00:35:29 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:35.304 00:35:29 -- common/autotest_common.sh@887 -- # local i 00:21:35.304 00:35:29 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:35.304 00:35:29 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:35.304 00:35:29 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:35.562 00:35:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:35.820 [ 00:21:35.820 { 00:21:35.820 "name": "BaseBdev4", 00:21:35.820 "aliases": [ 00:21:35.820 "b32a7807-482e-4396-8d3f-3513a16a8b9f" 00:21:35.820 ], 00:21:35.820 "product_name": "Malloc disk", 00:21:35.820 "block_size": 512, 00:21:35.820 "num_blocks": 65536, 00:21:35.820 "uuid": "b32a7807-482e-4396-8d3f-3513a16a8b9f", 00:21:35.820 "assigned_rate_limits": { 00:21:35.820 "rw_ios_per_sec": 0, 00:21:35.820 "rw_mbytes_per_sec": 0, 00:21:35.820 "r_mbytes_per_sec": 0, 00:21:35.820 "w_mbytes_per_sec": 0 00:21:35.820 }, 00:21:35.820 "claimed": true, 00:21:35.820 "claim_type": "exclusive_write", 00:21:35.820 "zoned": false, 00:21:35.820 "supported_io_types": { 00:21:35.820 "read": true, 00:21:35.820 "write": true, 00:21:35.820 "unmap": true, 00:21:35.820 "write_zeroes": true, 00:21:35.820 "flush": true, 00:21:35.820 "reset": true, 00:21:35.820 "compare": false, 00:21:35.820 "compare_and_write": false, 00:21:35.820 "abort": true, 00:21:35.820 "nvme_admin": false, 00:21:35.820 "nvme_io": false 00:21:35.820 }, 00:21:35.820 "memory_domains": [ 00:21:35.820 { 00:21:35.820 "dma_device_id": "system", 00:21:35.820 "dma_device_type": 1 00:21:35.820 }, 00:21:35.820 { 00:21:35.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.820 "dma_device_type": 2 00:21:35.820 } 00:21:35.820 ], 00:21:35.820 "driver_specific": {} 00:21:35.820 } 00:21:35.820 ] 00:21:35.820 00:35:29 -- common/autotest_common.sh@893 -- # return 0 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.820 00:35:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.109 00:35:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:36.109 "name": "Existed_Raid", 00:21:36.109 "uuid": "b8261508-faff-4185-b735-74d31c9ebd21", 00:21:36.109 "strip_size_kb": 64, 00:21:36.109 "state": "online", 00:21:36.109 "raid_level": "concat", 00:21:36.109 "superblock": true, 00:21:36.109 "num_base_bdevs": 4, 00:21:36.109 "num_base_bdevs_discovered": 4, 00:21:36.109 "num_base_bdevs_operational": 4, 00:21:36.109 "base_bdevs_list": [ 00:21:36.109 { 00:21:36.109 "name": "BaseBdev1", 00:21:36.109 "uuid": "c5a92749-28e5-4e88-9520-6f6a7c32ca1b", 00:21:36.109 "is_configured": true, 00:21:36.109 "data_offset": 2048, 00:21:36.109 "data_size": 63488 
00:21:36.109 }, 00:21:36.109 { 00:21:36.109 "name": "BaseBdev2", 00:21:36.109 "uuid": "cb2a9c36-1526-4a74-be2c-15ccb599249e", 00:21:36.109 "is_configured": true, 00:21:36.109 "data_offset": 2048, 00:21:36.109 "data_size": 63488 00:21:36.109 }, 00:21:36.109 { 00:21:36.109 "name": "BaseBdev3", 00:21:36.109 "uuid": "5ff6f9ac-c7d5-4894-9d89-8ab3bf764f2e", 00:21:36.109 "is_configured": true, 00:21:36.109 "data_offset": 2048, 00:21:36.109 "data_size": 63488 00:21:36.109 }, 00:21:36.109 { 00:21:36.109 "name": "BaseBdev4", 00:21:36.109 "uuid": "b32a7807-482e-4396-8d3f-3513a16a8b9f", 00:21:36.109 "is_configured": true, 00:21:36.109 "data_offset": 2048, 00:21:36.109 "data_size": 63488 00:21:36.109 } 00:21:36.109 ] 00:21:36.109 }' 00:21:36.109 00:35:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:36.109 00:35:29 -- common/autotest_common.sh@10 -- # set +x 00:21:36.673 00:35:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:36.929 [2024-04-24 00:35:30.665958] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:36.929 [2024-04-24 00:35:30.666220] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:36.929 [2024-04-24 00:35:30.666367] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.187 00:35:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.445 00:35:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:37.445 "name": "Existed_Raid", 00:21:37.445 "uuid": "b8261508-faff-4185-b735-74d31c9ebd21", 00:21:37.445 "strip_size_kb": 64, 00:21:37.445 "state": "offline", 00:21:37.445 "raid_level": "concat", 00:21:37.445 "superblock": true, 00:21:37.445 "num_base_bdevs": 4, 00:21:37.445 "num_base_bdevs_discovered": 3, 00:21:37.445 "num_base_bdevs_operational": 3, 00:21:37.445 "base_bdevs_list": [ 00:21:37.445 { 00:21:37.445 "name": null, 00:21:37.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.445 "is_configured": false, 00:21:37.445 "data_offset": 2048, 00:21:37.445 "data_size": 63488 00:21:37.445 }, 00:21:37.445 { 00:21:37.445 "name": "BaseBdev2", 00:21:37.445 "uuid": 
"cb2a9c36-1526-4a74-be2c-15ccb599249e", 00:21:37.445 "is_configured": true, 00:21:37.445 "data_offset": 2048, 00:21:37.445 "data_size": 63488 00:21:37.445 }, 00:21:37.445 { 00:21:37.445 "name": "BaseBdev3", 00:21:37.445 "uuid": "5ff6f9ac-c7d5-4894-9d89-8ab3bf764f2e", 00:21:37.445 "is_configured": true, 00:21:37.445 "data_offset": 2048, 00:21:37.445 "data_size": 63488 00:21:37.445 }, 00:21:37.445 { 00:21:37.445 "name": "BaseBdev4", 00:21:37.445 "uuid": "b32a7807-482e-4396-8d3f-3513a16a8b9f", 00:21:37.445 "is_configured": true, 00:21:37.445 "data_offset": 2048, 00:21:37.445 "data_size": 63488 00:21:37.445 } 00:21:37.445 ] 00:21:37.445 }' 00:21:37.445 00:35:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:37.445 00:35:31 -- common/autotest_common.sh@10 -- # set +x 00:21:38.008 00:35:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:38.008 00:35:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:38.008 00:35:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.008 00:35:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:38.008 00:35:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:38.008 00:35:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:38.008 00:35:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:38.265 [2024-04-24 00:35:31.986760] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:38.522 00:35:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:38.522 00:35:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:38.522 00:35:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:38.522 00:35:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.779 00:35:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:38.779 00:35:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:38.779 00:35:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:39.042 [2024-04-24 00:35:32.631081] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:39.043 00:35:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:39.043 00:35:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:39.043 00:35:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.043 00:35:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:39.301 00:35:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:39.301 00:35:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:39.301 00:35:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:39.560 [2024-04-24 00:35:33.275463] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:39.560 [2024-04-24 00:35:33.275686] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:21:39.863 00:35:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:39.863 00:35:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:39.863 00:35:33 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:39.863 00:35:33 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:39.863 00:35:33 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:39.863 00:35:33 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:39.863 00:35:33 -- bdev/bdev_raid.sh@287 -- # killprocess 128749 00:21:39.863 00:35:33 -- common/autotest_common.sh@936 -- # '[' -z 128749 ']' 00:21:39.863 00:35:33 -- common/autotest_common.sh@940 -- # kill -0 128749 00:21:39.863 00:35:33 -- common/autotest_common.sh@941 -- # uname 00:21:39.863 00:35:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:39.863 00:35:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128749 00:21:39.863 killing process with pid 128749 00:21:39.863 00:35:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:39.863 00:35:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:39.863 00:35:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128749' 00:21:39.863 00:35:33 -- common/autotest_common.sh@955 -- # kill 128749 00:21:39.863 [2024-04-24 00:35:33.646941] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:39.863 00:35:33 -- common/autotest_common.sh@960 -- # wait 128749 00:21:39.863 [2024-04-24 00:35:33.647085] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:41.765 ************************************ 00:21:41.765 END TEST raid_state_function_test_sb 00:21:41.765 ************************************ 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:41.765 00:21:41.765 real 0m16.648s 00:21:41.765 user 0m28.667s 00:21:41.765 sys 0m2.297s 00:21:41.765 00:35:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:41.765 00:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:21:41.765 00:35:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:41.765 00:35:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:41.765 00:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:41.765 ************************************ 00:21:41.765 START TEST raid_superblock_test 00:21:41.765 ************************************ 00:21:41.765 00:35:35 -- common/autotest_common.sh@1111 -- # raid_superblock_test concat 4 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@351 -- # 
strip_size_create_arg='-z 64' 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@357 -- # raid_pid=129222 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@358 -- # waitforlisten 129222 /var/tmp/spdk-raid.sock 00:21:41.765 00:35:35 -- common/autotest_common.sh@817 -- # '[' -z 129222 ']' 00:21:41.765 00:35:35 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:41.765 00:35:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:41.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:41.765 00:35:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:41.765 00:35:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:41.765 00:35:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:41.765 00:35:35 -- common/autotest_common.sh@10 -- # set +x 00:21:41.765 [2024-04-24 00:35:35.241968] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:21:41.765 [2024-04-24 00:35:35.242380] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129222 ] 00:21:41.765 [2024-04-24 00:35:35.406514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.038 [2024-04-24 00:35:35.698038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.296 [2024-04-24 00:35:35.971567] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.555 00:35:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:42.555 00:35:36 -- common/autotest_common.sh@850 -- # return 0 00:21:42.555 00:35:36 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:21:42.555 00:35:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:42.555 00:35:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:21:42.555 00:35:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:21:42.555 00:35:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:42.555 00:35:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:42.555 00:35:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:42.555 00:35:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:42.555 00:35:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:42.812 malloc1 00:21:42.812 00:35:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:43.070 [2024-04-24 00:35:36.699628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:43.070 [2024-04-24 00:35:36.699995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.070 [2024-04-24 00:35:36.700070] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:21:43.070 [2024-04-24 00:35:36.700226] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.070 [2024-04-24 00:35:36.702989] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.070 [2024-04-24 
00:35:36.703207] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:43.070 pt1 00:21:43.070 00:35:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:43.070 00:35:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:43.070 00:35:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:21:43.070 00:35:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:21:43.070 00:35:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:43.070 00:35:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:43.070 00:35:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:43.070 00:35:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:43.070 00:35:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:43.328 malloc2 00:21:43.328 00:35:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:43.586 [2024-04-24 00:35:37.206546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:43.586 [2024-04-24 00:35:37.206838] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.586 [2024-04-24 00:35:37.206940] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:43.586 [2024-04-24 00:35:37.207088] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.586 [2024-04-24 00:35:37.209750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.586 [2024-04-24 00:35:37.209950] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:43.586 pt2 00:21:43.586 00:35:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:43.586 00:35:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:43.586 00:35:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:21:43.586 00:35:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:21:43.586 00:35:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:43.586 00:35:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:43.586 00:35:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:43.586 00:35:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:43.586 00:35:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:43.865 malloc3 00:21:43.865 00:35:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:44.123 [2024-04-24 00:35:37.780249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:44.123 [2024-04-24 00:35:37.780565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.123 [2024-04-24 00:35:37.780654] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:21:44.123 [2024-04-24 00:35:37.780835] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.123 [2024-04-24 00:35:37.783585] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.123 [2024-04-24 
00:35:37.783776] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:44.123 pt3 00:21:44.123 00:35:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:44.123 00:35:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:44.123 00:35:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:21:44.123 00:35:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:21:44.123 00:35:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:44.123 00:35:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:44.123 00:35:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:44.123 00:35:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:44.123 00:35:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:44.380 malloc4 00:21:44.381 00:35:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:44.639 [2024-04-24 00:35:38.341617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:44.639 [2024-04-24 00:35:38.341961] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.639 [2024-04-24 00:35:38.342039] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:44.639 [2024-04-24 00:35:38.342215] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.639 [2024-04-24 00:35:38.344886] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.639 [2024-04-24 00:35:38.345074] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:44.639 pt4 00:21:44.639 00:35:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:44.639 00:35:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:44.639 00:35:38 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:44.897 [2024-04-24 00:35:38.565886] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:44.897 [2024-04-24 00:35:38.568349] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:44.897 [2024-04-24 00:35:38.568590] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:44.897 [2024-04-24 00:35:38.568806] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:44.897 [2024-04-24 00:35:38.569116] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:21:44.897 [2024-04-24 00:35:38.569227] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:44.897 [2024-04-24 00:35:38.569459] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:44.897 [2024-04-24 00:35:38.569954] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:21:44.897 [2024-04-24 00:35:38.570066] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:21:44.897 [2024-04-24 00:35:38.570369] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.897 00:35:38 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:21:44.897 00:35:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:44.897 00:35:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:44.897 00:35:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:44.897 00:35:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:44.897 00:35:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:44.897 00:35:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:44.897 00:35:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:44.897 00:35:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:44.897 00:35:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:44.897 00:35:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.897 00:35:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.155 00:35:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.155 "name": "raid_bdev1", 00:21:45.155 "uuid": "5fd094cb-bfac-49d6-b1f0-41cff0e8521f", 00:21:45.155 "strip_size_kb": 64, 00:21:45.155 "state": "online", 00:21:45.155 "raid_level": "concat", 00:21:45.155 "superblock": true, 00:21:45.155 "num_base_bdevs": 4, 00:21:45.155 "num_base_bdevs_discovered": 4, 00:21:45.155 "num_base_bdevs_operational": 4, 00:21:45.155 "base_bdevs_list": [ 00:21:45.155 { 00:21:45.155 "name": "pt1", 00:21:45.155 "uuid": "3f25973b-79f4-5a91-a2ed-6809c28890dc", 00:21:45.155 "is_configured": true, 00:21:45.155 "data_offset": 2048, 00:21:45.155 "data_size": 63488 00:21:45.155 }, 00:21:45.155 { 00:21:45.155 "name": "pt2", 00:21:45.155 "uuid": "3b8bcf25-dce2-564a-83b4-a38a1753f81a", 00:21:45.155 "is_configured": true, 00:21:45.155 "data_offset": 2048, 00:21:45.155 "data_size": 63488 00:21:45.155 }, 00:21:45.155 { 00:21:45.155 "name": "pt3", 00:21:45.155 "uuid": "5d784421-d50f-5699-9198-b2adb1cee61f", 00:21:45.155 "is_configured": true, 00:21:45.155 "data_offset": 2048, 00:21:45.155 "data_size": 63488 00:21:45.155 }, 00:21:45.155 { 00:21:45.155 "name": "pt4", 00:21:45.155 "uuid": "32875721-044f-5fee-aa69-2c0fbb9e94a5", 00:21:45.155 "is_configured": true, 00:21:45.155 "data_offset": 2048, 00:21:45.155 "data_size": 63488 00:21:45.155 } 00:21:45.155 ] 00:21:45.155 }' 00:21:45.155 00:35:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.155 00:35:38 -- common/autotest_common.sh@10 -- # set +x 00:21:45.721 00:35:39 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:45.721 00:35:39 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:21:45.999 [2024-04-24 00:35:39.642877] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:45.999 00:35:39 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5fd094cb-bfac-49d6-b1f0-41cff0e8521f 00:21:45.999 00:35:39 -- bdev/bdev_raid.sh@380 -- # '[' -z 5fd094cb-bfac-49d6-b1f0-41cff0e8521f ']' 00:21:45.999 00:35:39 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:46.258 [2024-04-24 00:35:39.962621] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.258 [2024-04-24 00:35:39.962859] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.258 [2024-04-24 00:35:39.963087] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.258 
[2024-04-24 00:35:39.963258] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.258 [2024-04-24 00:35:39.963351] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:21:46.258 00:35:39 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.258 00:35:39 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:21:46.516 00:35:40 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:21:46.516 00:35:40 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:21:46.516 00:35:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:46.516 00:35:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:46.774 00:35:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:46.774 00:35:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:47.033 00:35:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:47.033 00:35:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:47.292 00:35:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:47.292 00:35:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:47.550 00:35:41 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:47.550 00:35:41 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:47.808 00:35:41 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:21:47.808 00:35:41 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:47.808 00:35:41 -- common/autotest_common.sh@638 -- # local es=0 00:21:47.808 00:35:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:47.808 00:35:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:47.808 00:35:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:47.808 00:35:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:47.808 00:35:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:47.808 00:35:41 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:47.808 00:35:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:47.808 00:35:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:47.808 00:35:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:47.808 00:35:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:21:48.066 [2024-04-24 00:35:41.742997] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is 
claimed 00:21:48.066 [2024-04-24 00:35:41.745408] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:48.066 [2024-04-24 00:35:41.745628] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:48.066 [2024-04-24 00:35:41.745786] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:21:48.066 [2024-04-24 00:35:41.745924] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:21:48.066 [2024-04-24 00:35:41.746079] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:21:48.066 [2024-04-24 00:35:41.746199] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:21:48.066 [2024-04-24 00:35:41.746343] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:21:48.066 [2024-04-24 00:35:41.746451] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:48.066 [2024-04-24 00:35:41.746492] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:21:48.066 request: 00:21:48.066 { 00:21:48.066 "name": "raid_bdev1", 00:21:48.066 "raid_level": "concat", 00:21:48.066 "base_bdevs": [ 00:21:48.066 "malloc1", 00:21:48.066 "malloc2", 00:21:48.066 "malloc3", 00:21:48.066 "malloc4" 00:21:48.066 ], 00:21:48.066 "superblock": false, 00:21:48.066 "strip_size_kb": 64, 00:21:48.066 "method": "bdev_raid_create", 00:21:48.066 "req_id": 1 00:21:48.066 } 00:21:48.066 Got JSON-RPC error response 00:21:48.066 response: 00:21:48.066 { 00:21:48.067 "code": -17, 00:21:48.067 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:48.067 } 00:21:48.067 00:35:41 -- common/autotest_common.sh@641 -- # es=1 00:21:48.067 00:35:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:48.067 00:35:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:48.067 00:35:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:48.067 00:35:41 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.067 00:35:41 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:21:48.324 00:35:41 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:21:48.324 00:35:41 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:21:48.324 00:35:41 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:48.582 [2024-04-24 00:35:42.279145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:48.582 [2024-04-24 00:35:42.279456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.582 [2024-04-24 00:35:42.279614] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:48.582 [2024-04-24 00:35:42.279725] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.582 [2024-04-24 00:35:42.282406] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.582 [2024-04-24 00:35:42.282630] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:48.582 [2024-04-24 00:35:42.282901] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt1 00:21:48.582 [2024-04-24 00:35:42.283091] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:48.582 pt1 00:21:48.582 00:35:42 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:48.582 00:35:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:48.582 00:35:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:48.582 00:35:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:48.582 00:35:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:48.582 00:35:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:48.582 00:35:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:48.582 00:35:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:48.582 00:35:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:48.582 00:35:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:48.582 00:35:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.582 00:35:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.840 00:35:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:48.840 "name": "raid_bdev1", 00:21:48.840 "uuid": "5fd094cb-bfac-49d6-b1f0-41cff0e8521f", 00:21:48.840 "strip_size_kb": 64, 00:21:48.840 "state": "configuring", 00:21:48.840 "raid_level": "concat", 00:21:48.840 "superblock": true, 00:21:48.840 "num_base_bdevs": 4, 00:21:48.840 "num_base_bdevs_discovered": 1, 00:21:48.840 "num_base_bdevs_operational": 4, 00:21:48.840 "base_bdevs_list": [ 00:21:48.840 { 00:21:48.840 "name": "pt1", 00:21:48.840 "uuid": "3f25973b-79f4-5a91-a2ed-6809c28890dc", 00:21:48.840 "is_configured": true, 00:21:48.840 "data_offset": 2048, 00:21:48.840 "data_size": 63488 00:21:48.840 }, 00:21:48.840 { 00:21:48.840 "name": null, 00:21:48.840 "uuid": "3b8bcf25-dce2-564a-83b4-a38a1753f81a", 00:21:48.840 "is_configured": false, 00:21:48.840 "data_offset": 2048, 00:21:48.840 "data_size": 63488 00:21:48.840 }, 00:21:48.840 { 00:21:48.840 "name": null, 00:21:48.840 "uuid": "5d784421-d50f-5699-9198-b2adb1cee61f", 00:21:48.840 "is_configured": false, 00:21:48.840 "data_offset": 2048, 00:21:48.840 "data_size": 63488 00:21:48.840 }, 00:21:48.840 { 00:21:48.840 "name": null, 00:21:48.840 "uuid": "32875721-044f-5fee-aa69-2c0fbb9e94a5", 00:21:48.840 "is_configured": false, 00:21:48.840 "data_offset": 2048, 00:21:48.840 "data_size": 63488 00:21:48.840 } 00:21:48.840 ] 00:21:48.840 }' 00:21:48.840 00:35:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:48.840 00:35:42 -- common/autotest_common.sh@10 -- # set +x 00:21:49.772 00:35:43 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:21:49.772 00:35:43 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:49.772 [2024-04-24 00:35:43.503679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:49.772 [2024-04-24 00:35:43.504045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.772 [2024-04-24 00:35:43.504131] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:49.772 [2024-04-24 00:35:43.504359] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.772 [2024-04-24 00:35:43.504909] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:21:49.772 [2024-04-24 00:35:43.505087] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:49.772 [2024-04-24 00:35:43.505311] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:49.772 [2024-04-24 00:35:43.505428] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:49.772 pt2 00:21:49.772 00:35:43 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:50.043 [2024-04-24 00:35:43.787836] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:50.043 00:35:43 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:21:50.043 00:35:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:50.043 00:35:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:50.043 00:35:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:50.043 00:35:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:50.043 00:35:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:50.043 00:35:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:50.043 00:35:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:50.043 00:35:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:50.043 00:35:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:50.043 00:35:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.043 00:35:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.337 00:35:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:50.337 "name": "raid_bdev1", 00:21:50.337 "uuid": "5fd094cb-bfac-49d6-b1f0-41cff0e8521f", 00:21:50.337 "strip_size_kb": 64, 00:21:50.337 "state": "configuring", 00:21:50.337 "raid_level": "concat", 00:21:50.337 "superblock": true, 00:21:50.337 "num_base_bdevs": 4, 00:21:50.337 "num_base_bdevs_discovered": 1, 00:21:50.337 "num_base_bdevs_operational": 4, 00:21:50.337 "base_bdevs_list": [ 00:21:50.337 { 00:21:50.337 "name": "pt1", 00:21:50.337 "uuid": "3f25973b-79f4-5a91-a2ed-6809c28890dc", 00:21:50.337 "is_configured": true, 00:21:50.337 "data_offset": 2048, 00:21:50.337 "data_size": 63488 00:21:50.337 }, 00:21:50.337 { 00:21:50.337 "name": null, 00:21:50.337 "uuid": "3b8bcf25-dce2-564a-83b4-a38a1753f81a", 00:21:50.337 "is_configured": false, 00:21:50.337 "data_offset": 2048, 00:21:50.337 "data_size": 63488 00:21:50.337 }, 00:21:50.337 { 00:21:50.337 "name": null, 00:21:50.337 "uuid": "5d784421-d50f-5699-9198-b2adb1cee61f", 00:21:50.337 "is_configured": false, 00:21:50.337 "data_offset": 2048, 00:21:50.337 "data_size": 63488 00:21:50.337 }, 00:21:50.337 { 00:21:50.337 "name": null, 00:21:50.337 "uuid": "32875721-044f-5fee-aa69-2c0fbb9e94a5", 00:21:50.337 "is_configured": false, 00:21:50.337 "data_offset": 2048, 00:21:50.337 "data_size": 63488 00:21:50.337 } 00:21:50.337 ] 00:21:50.337 }' 00:21:50.337 00:35:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:50.337 00:35:44 -- common/autotest_common.sh@10 -- # set +x 00:21:50.903 00:35:44 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:21:50.903 00:35:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:51.163 00:35:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:21:51.421 [2024-04-24 00:35:44.972044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:51.421 [2024-04-24 00:35:44.972349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.421 [2024-04-24 00:35:44.972430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:51.421 [2024-04-24 00:35:44.972541] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.421 [2024-04-24 00:35:44.973049] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.421 [2024-04-24 00:35:44.973225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:51.421 [2024-04-24 00:35:44.973442] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:51.421 [2024-04-24 00:35:44.973559] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:51.421 pt2 00:21:51.421 00:35:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:51.421 00:35:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:51.421 00:35:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:51.421 [2024-04-24 00:35:45.196062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:51.421 [2024-04-24 00:35:45.196402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.421 [2024-04-24 00:35:45.196477] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:51.421 [2024-04-24 00:35:45.196709] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.421 [2024-04-24 00:35:45.197292] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.421 [2024-04-24 00:35:45.197472] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:51.421 [2024-04-24 00:35:45.197691] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:51.421 [2024-04-24 00:35:45.197800] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:51.421 pt3 00:21:51.680 00:35:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:51.680 00:35:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:51.680 00:35:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:51.680 [2024-04-24 00:35:45.408149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:51.680 [2024-04-24 00:35:45.408472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.680 [2024-04-24 00:35:45.408554] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:51.680 [2024-04-24 00:35:45.408692] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.680 [2024-04-24 00:35:45.409188] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.680 [2024-04-24 00:35:45.409353] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:51.680 [2024-04-24 00:35:45.409598] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev 
pt4 00:21:51.680 [2024-04-24 00:35:45.409702] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:51.680 [2024-04-24 00:35:45.409891] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:21:51.680 [2024-04-24 00:35:45.409975] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:51.680 [2024-04-24 00:35:45.410178] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:51.680 [2024-04-24 00:35:45.410615] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:21:51.680 [2024-04-24 00:35:45.410666] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:21:51.680 [2024-04-24 00:35:45.410852] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.680 pt4 00:21:51.680 00:35:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:51.680 00:35:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:51.680 00:35:45 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:21:51.680 00:35:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:51.680 00:35:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:51.680 00:35:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:21:51.680 00:35:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:51.681 00:35:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:51.681 00:35:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:51.681 00:35:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:51.681 00:35:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:51.681 00:35:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:51.681 00:35:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.681 00:35:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.938 00:35:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:51.938 "name": "raid_bdev1", 00:21:51.938 "uuid": "5fd094cb-bfac-49d6-b1f0-41cff0e8521f", 00:21:51.938 "strip_size_kb": 64, 00:21:51.938 "state": "online", 00:21:51.938 "raid_level": "concat", 00:21:51.938 "superblock": true, 00:21:51.938 "num_base_bdevs": 4, 00:21:51.938 "num_base_bdevs_discovered": 4, 00:21:51.938 "num_base_bdevs_operational": 4, 00:21:51.938 "base_bdevs_list": [ 00:21:51.938 { 00:21:51.938 "name": "pt1", 00:21:51.938 "uuid": "3f25973b-79f4-5a91-a2ed-6809c28890dc", 00:21:51.938 "is_configured": true, 00:21:51.938 "data_offset": 2048, 00:21:51.938 "data_size": 63488 00:21:51.938 }, 00:21:51.938 { 00:21:51.938 "name": "pt2", 00:21:51.938 "uuid": "3b8bcf25-dce2-564a-83b4-a38a1753f81a", 00:21:51.938 "is_configured": true, 00:21:51.938 "data_offset": 2048, 00:21:51.938 "data_size": 63488 00:21:51.938 }, 00:21:51.938 { 00:21:51.938 "name": "pt3", 00:21:51.938 "uuid": "5d784421-d50f-5699-9198-b2adb1cee61f", 00:21:51.938 "is_configured": true, 00:21:51.938 "data_offset": 2048, 00:21:51.938 "data_size": 63488 00:21:51.938 }, 00:21:51.938 { 00:21:51.938 "name": "pt4", 00:21:51.938 "uuid": "32875721-044f-5fee-aa69-2c0fbb9e94a5", 00:21:51.938 "is_configured": true, 00:21:51.938 "data_offset": 2048, 00:21:51.938 "data_size": 63488 00:21:51.938 } 00:21:51.938 ] 00:21:51.938 }' 00:21:51.938 00:35:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:51.938 
00:35:45 -- common/autotest_common.sh@10 -- # set +x 00:21:52.896 00:35:46 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:21:52.896 00:35:46 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:52.896 [2024-04-24 00:35:46.636739] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:52.896 00:35:46 -- bdev/bdev_raid.sh@430 -- # '[' 5fd094cb-bfac-49d6-b1f0-41cff0e8521f '!=' 5fd094cb-bfac-49d6-b1f0-41cff0e8521f ']' 00:21:52.896 00:35:46 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:21:52.896 00:35:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:52.896 00:35:46 -- bdev/bdev_raid.sh@197 -- # return 1 00:21:52.896 00:35:46 -- bdev/bdev_raid.sh@511 -- # killprocess 129222 00:21:52.896 00:35:46 -- common/autotest_common.sh@936 -- # '[' -z 129222 ']' 00:21:52.896 00:35:46 -- common/autotest_common.sh@940 -- # kill -0 129222 00:21:52.896 00:35:46 -- common/autotest_common.sh@941 -- # uname 00:21:52.896 00:35:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:52.896 00:35:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129222 00:21:52.896 00:35:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:52.896 00:35:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:52.896 00:35:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129222' 00:21:52.896 killing process with pid 129222 00:21:52.896 00:35:46 -- common/autotest_common.sh@955 -- # kill 129222 00:21:52.896 [2024-04-24 00:35:46.686425] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:52.896 00:35:46 -- common/autotest_common.sh@960 -- # wait 129222 00:21:52.896 [2024-04-24 00:35:46.686637] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.896 [2024-04-24 00:35:46.686797] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.896 [2024-04-24 00:35:46.686879] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:21:53.465 [2024-04-24 00:35:47.137241] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:54.846 ************************************ 00:21:54.846 END TEST raid_superblock_test 00:21:54.846 ************************************ 00:21:54.846 00:35:48 -- bdev/bdev_raid.sh@513 -- # return 0 00:21:54.846 00:21:54.846 real 0m13.379s 00:21:54.846 user 0m22.604s 00:21:54.846 sys 0m1.813s 00:21:54.846 00:35:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:54.846 00:35:48 -- common/autotest_common.sh@10 -- # set +x 00:21:54.846 00:35:48 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:21:54.846 00:35:48 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:21:54.846 00:35:48 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:54.846 00:35:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:54.846 00:35:48 -- common/autotest_common.sh@10 -- # set +x 00:21:55.106 ************************************ 00:21:55.106 START TEST raid_state_function_test 00:21:55.106 ************************************ 00:21:55.106 00:35:48 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 4 false 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 
00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@226 -- # raid_pid=129566 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:55.106 Process raid pid: 129566 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129566' 00:21:55.106 00:35:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129566 /var/tmp/spdk-raid.sock 00:21:55.106 00:35:48 -- common/autotest_common.sh@817 -- # '[' -z 129566 ']' 00:21:55.106 00:35:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:55.106 00:35:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:55.106 00:35:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:55.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:55.106 00:35:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:55.106 00:35:48 -- common/autotest_common.sh@10 -- # set +x 00:21:55.106 [2024-04-24 00:35:48.732032] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
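The process launched at this point is SPDK's standalone bdev application, started on a private RPC socket so the raid tests can drive it with rpc.py without touching any system-wide instance. A rough sketch of that setup, using only paths, flags and RPC names visible in this trace (process backgrounding and readiness polling are handled by the test's own helpers, not shown):

    # Start the bare bdev app with raid debug logging on a dedicated socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &

    # Every later RPC targets that socket, e.g. creating a base bdev and the array:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid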
00:21:55.106 [2024-04-24 00:35:48.732409] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.367 [2024-04-24 00:35:48.901855] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.627 [2024-04-24 00:35:49.187755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.887 [2024-04-24 00:35:49.435314] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:56.147 00:35:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:56.147 00:35:49 -- common/autotest_common.sh@850 -- # return 0 00:21:56.147 00:35:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:56.409 [2024-04-24 00:35:49.987717] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:56.409 [2024-04-24 00:35:49.988041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:56.409 [2024-04-24 00:35:49.988183] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:56.409 [2024-04-24 00:35:49.988297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:56.409 [2024-04-24 00:35:49.988411] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:56.409 [2024-04-24 00:35:49.988494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:56.409 [2024-04-24 00:35:49.988611] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:56.409 [2024-04-24 00:35:49.988671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:56.409 00:35:49 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:56.409 00:35:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:56.409 00:35:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:56.409 00:35:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:56.409 00:35:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:56.409 00:35:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:56.409 00:35:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:56.409 00:35:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:56.409 00:35:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:56.409 00:35:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:56.409 00:35:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.409 00:35:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.670 00:35:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:56.670 "name": "Existed_Raid", 00:21:56.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.670 "strip_size_kb": 0, 00:21:56.670 "state": "configuring", 00:21:56.670 "raid_level": "raid1", 00:21:56.670 "superblock": false, 00:21:56.670 "num_base_bdevs": 4, 00:21:56.670 "num_base_bdevs_discovered": 0, 00:21:56.670 "num_base_bdevs_operational": 4, 00:21:56.670 "base_bdevs_list": [ 00:21:56.670 { 00:21:56.670 "name": 
"BaseBdev1", 00:21:56.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.670 "is_configured": false, 00:21:56.670 "data_offset": 0, 00:21:56.670 "data_size": 0 00:21:56.670 }, 00:21:56.670 { 00:21:56.670 "name": "BaseBdev2", 00:21:56.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.670 "is_configured": false, 00:21:56.670 "data_offset": 0, 00:21:56.670 "data_size": 0 00:21:56.670 }, 00:21:56.670 { 00:21:56.670 "name": "BaseBdev3", 00:21:56.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.670 "is_configured": false, 00:21:56.670 "data_offset": 0, 00:21:56.670 "data_size": 0 00:21:56.670 }, 00:21:56.670 { 00:21:56.670 "name": "BaseBdev4", 00:21:56.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.670 "is_configured": false, 00:21:56.670 "data_offset": 0, 00:21:56.670 "data_size": 0 00:21:56.670 } 00:21:56.670 ] 00:21:56.670 }' 00:21:56.670 00:35:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:56.670 00:35:50 -- common/autotest_common.sh@10 -- # set +x 00:21:57.240 00:35:50 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:57.500 [2024-04-24 00:35:51.059778] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:57.500 [2024-04-24 00:35:51.060066] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:21:57.500 00:35:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:57.760 [2024-04-24 00:35:51.339843] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:57.760 [2024-04-24 00:35:51.340132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:57.760 [2024-04-24 00:35:51.340224] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:57.760 [2024-04-24 00:35:51.340343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:57.760 [2024-04-24 00:35:51.340425] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:57.760 [2024-04-24 00:35:51.340502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:57.760 [2024-04-24 00:35:51.340675] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:57.760 [2024-04-24 00:35:51.340736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:57.760 00:35:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:58.018 [2024-04-24 00:35:51.669684] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:58.018 BaseBdev1 00:21:58.018 00:35:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:58.018 00:35:51 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:21:58.018 00:35:51 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:58.018 00:35:51 -- common/autotest_common.sh@887 -- # local i 00:21:58.018 00:35:51 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:58.018 00:35:51 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:58.018 00:35:51 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:58.277 00:35:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:58.535 [ 00:21:58.535 { 00:21:58.535 "name": "BaseBdev1", 00:21:58.535 "aliases": [ 00:21:58.535 "dad5b7a3-a144-41aa-b221-805ce9e462f4" 00:21:58.535 ], 00:21:58.535 "product_name": "Malloc disk", 00:21:58.535 "block_size": 512, 00:21:58.535 "num_blocks": 65536, 00:21:58.535 "uuid": "dad5b7a3-a144-41aa-b221-805ce9e462f4", 00:21:58.535 "assigned_rate_limits": { 00:21:58.535 "rw_ios_per_sec": 0, 00:21:58.535 "rw_mbytes_per_sec": 0, 00:21:58.535 "r_mbytes_per_sec": 0, 00:21:58.535 "w_mbytes_per_sec": 0 00:21:58.535 }, 00:21:58.535 "claimed": true, 00:21:58.535 "claim_type": "exclusive_write", 00:21:58.535 "zoned": false, 00:21:58.535 "supported_io_types": { 00:21:58.535 "read": true, 00:21:58.535 "write": true, 00:21:58.535 "unmap": true, 00:21:58.535 "write_zeroes": true, 00:21:58.535 "flush": true, 00:21:58.535 "reset": true, 00:21:58.535 "compare": false, 00:21:58.535 "compare_and_write": false, 00:21:58.535 "abort": true, 00:21:58.535 "nvme_admin": false, 00:21:58.535 "nvme_io": false 00:21:58.535 }, 00:21:58.535 "memory_domains": [ 00:21:58.535 { 00:21:58.535 "dma_device_id": "system", 00:21:58.535 "dma_device_type": 1 00:21:58.535 }, 00:21:58.535 { 00:21:58.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.536 "dma_device_type": 2 00:21:58.536 } 00:21:58.536 ], 00:21:58.536 "driver_specific": {} 00:21:58.536 } 00:21:58.536 ] 00:21:58.536 00:35:52 -- common/autotest_common.sh@893 -- # return 0 00:21:58.536 00:35:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:58.536 00:35:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:58.536 00:35:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:58.536 00:35:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:58.536 00:35:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:58.536 00:35:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:58.536 00:35:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:58.536 00:35:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:58.536 00:35:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:58.536 00:35:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:58.536 00:35:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.536 00:35:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.848 00:35:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:58.848 "name": "Existed_Raid", 00:21:58.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.848 "strip_size_kb": 0, 00:21:58.848 "state": "configuring", 00:21:58.848 "raid_level": "raid1", 00:21:58.848 "superblock": false, 00:21:58.848 "num_base_bdevs": 4, 00:21:58.848 "num_base_bdevs_discovered": 1, 00:21:58.848 "num_base_bdevs_operational": 4, 00:21:58.848 "base_bdevs_list": [ 00:21:58.848 { 00:21:58.848 "name": "BaseBdev1", 00:21:58.848 "uuid": "dad5b7a3-a144-41aa-b221-805ce9e462f4", 00:21:58.848 "is_configured": true, 00:21:58.848 "data_offset": 0, 00:21:58.848 "data_size": 65536 00:21:58.848 }, 00:21:58.848 { 00:21:58.848 "name": "BaseBdev2", 00:21:58.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.848 
"is_configured": false, 00:21:58.848 "data_offset": 0, 00:21:58.848 "data_size": 0 00:21:58.848 }, 00:21:58.848 { 00:21:58.848 "name": "BaseBdev3", 00:21:58.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.848 "is_configured": false, 00:21:58.848 "data_offset": 0, 00:21:58.848 "data_size": 0 00:21:58.848 }, 00:21:58.848 { 00:21:58.848 "name": "BaseBdev4", 00:21:58.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.848 "is_configured": false, 00:21:58.848 "data_offset": 0, 00:21:58.848 "data_size": 0 00:21:58.848 } 00:21:58.848 ] 00:21:58.848 }' 00:21:58.848 00:35:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:58.848 00:35:52 -- common/autotest_common.sh@10 -- # set +x 00:21:59.414 00:35:52 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:59.672 [2024-04-24 00:35:53.250086] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:59.672 [2024-04-24 00:35:53.250338] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:21:59.672 00:35:53 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:21:59.672 00:35:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:59.930 [2024-04-24 00:35:53.522181] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:59.930 [2024-04-24 00:35:53.524652] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:59.930 [2024-04-24 00:35:53.524877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:59.930 [2024-04-24 00:35:53.524985] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:59.930 [2024-04-24 00:35:53.525048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:59.930 [2024-04-24 00:35:53.525210] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:59.930 [2024-04-24 00:35:53.525267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.930 00:35:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.188 00:35:53 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:00.188 "name": "Existed_Raid", 00:22:00.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.188 "strip_size_kb": 0, 00:22:00.188 "state": "configuring", 00:22:00.188 "raid_level": "raid1", 00:22:00.188 "superblock": false, 00:22:00.188 "num_base_bdevs": 4, 00:22:00.188 "num_base_bdevs_discovered": 1, 00:22:00.188 "num_base_bdevs_operational": 4, 00:22:00.188 "base_bdevs_list": [ 00:22:00.188 { 00:22:00.188 "name": "BaseBdev1", 00:22:00.188 "uuid": "dad5b7a3-a144-41aa-b221-805ce9e462f4", 00:22:00.188 "is_configured": true, 00:22:00.188 "data_offset": 0, 00:22:00.188 "data_size": 65536 00:22:00.188 }, 00:22:00.188 { 00:22:00.188 "name": "BaseBdev2", 00:22:00.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.188 "is_configured": false, 00:22:00.188 "data_offset": 0, 00:22:00.188 "data_size": 0 00:22:00.188 }, 00:22:00.188 { 00:22:00.188 "name": "BaseBdev3", 00:22:00.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.188 "is_configured": false, 00:22:00.188 "data_offset": 0, 00:22:00.188 "data_size": 0 00:22:00.188 }, 00:22:00.188 { 00:22:00.188 "name": "BaseBdev4", 00:22:00.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.188 "is_configured": false, 00:22:00.188 "data_offset": 0, 00:22:00.188 "data_size": 0 00:22:00.188 } 00:22:00.188 ] 00:22:00.188 }' 00:22:00.188 00:35:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:00.188 00:35:53 -- common/autotest_common.sh@10 -- # set +x 00:22:00.754 00:35:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:01.050 [2024-04-24 00:35:54.680554] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:01.050 BaseBdev2 00:22:01.050 00:35:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:01.050 00:35:54 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:22:01.050 00:35:54 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:01.050 00:35:54 -- common/autotest_common.sh@887 -- # local i 00:22:01.050 00:35:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:01.050 00:35:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:01.050 00:35:54 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:01.308 00:35:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:01.565 [ 00:22:01.565 { 00:22:01.565 "name": "BaseBdev2", 00:22:01.565 "aliases": [ 00:22:01.565 "fe5ca62f-c7cb-422e-b26d-bd87127f55a9" 00:22:01.565 ], 00:22:01.565 "product_name": "Malloc disk", 00:22:01.565 "block_size": 512, 00:22:01.565 "num_blocks": 65536, 00:22:01.565 "uuid": "fe5ca62f-c7cb-422e-b26d-bd87127f55a9", 00:22:01.565 "assigned_rate_limits": { 00:22:01.565 "rw_ios_per_sec": 0, 00:22:01.565 "rw_mbytes_per_sec": 0, 00:22:01.565 "r_mbytes_per_sec": 0, 00:22:01.565 "w_mbytes_per_sec": 0 00:22:01.565 }, 00:22:01.565 "claimed": true, 00:22:01.565 "claim_type": "exclusive_write", 00:22:01.565 "zoned": false, 00:22:01.565 "supported_io_types": { 00:22:01.565 "read": true, 00:22:01.565 "write": true, 00:22:01.565 "unmap": true, 00:22:01.565 "write_zeroes": true, 00:22:01.565 "flush": true, 00:22:01.565 "reset": true, 00:22:01.565 "compare": false, 00:22:01.565 "compare_and_write": false, 00:22:01.565 "abort": true, 00:22:01.565 "nvme_admin": 
false, 00:22:01.565 "nvme_io": false 00:22:01.566 }, 00:22:01.566 "memory_domains": [ 00:22:01.566 { 00:22:01.566 "dma_device_id": "system", 00:22:01.566 "dma_device_type": 1 00:22:01.566 }, 00:22:01.566 { 00:22:01.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.566 "dma_device_type": 2 00:22:01.566 } 00:22:01.566 ], 00:22:01.566 "driver_specific": {} 00:22:01.566 } 00:22:01.566 ] 00:22:01.566 00:35:55 -- common/autotest_common.sh@893 -- # return 0 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.566 00:35:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.824 00:35:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:01.824 "name": "Existed_Raid", 00:22:01.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.824 "strip_size_kb": 0, 00:22:01.824 "state": "configuring", 00:22:01.824 "raid_level": "raid1", 00:22:01.824 "superblock": false, 00:22:01.824 "num_base_bdevs": 4, 00:22:01.824 "num_base_bdevs_discovered": 2, 00:22:01.824 "num_base_bdevs_operational": 4, 00:22:01.824 "base_bdevs_list": [ 00:22:01.824 { 00:22:01.824 "name": "BaseBdev1", 00:22:01.824 "uuid": "dad5b7a3-a144-41aa-b221-805ce9e462f4", 00:22:01.824 "is_configured": true, 00:22:01.824 "data_offset": 0, 00:22:01.824 "data_size": 65536 00:22:01.824 }, 00:22:01.824 { 00:22:01.824 "name": "BaseBdev2", 00:22:01.824 "uuid": "fe5ca62f-c7cb-422e-b26d-bd87127f55a9", 00:22:01.824 "is_configured": true, 00:22:01.824 "data_offset": 0, 00:22:01.824 "data_size": 65536 00:22:01.824 }, 00:22:01.824 { 00:22:01.824 "name": "BaseBdev3", 00:22:01.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.824 "is_configured": false, 00:22:01.824 "data_offset": 0, 00:22:01.824 "data_size": 0 00:22:01.824 }, 00:22:01.824 { 00:22:01.824 "name": "BaseBdev4", 00:22:01.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.824 "is_configured": false, 00:22:01.824 "data_offset": 0, 00:22:01.824 "data_size": 0 00:22:01.824 } 00:22:01.824 ] 00:22:01.824 }' 00:22:01.824 00:35:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:01.824 00:35:55 -- common/autotest_common.sh@10 -- # set +x 00:22:02.758 00:35:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:02.758 [2024-04-24 00:35:56.502963] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.758 BaseBdev3 00:22:02.758 00:35:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev 
BaseBdev3 00:22:02.758 00:35:56 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:22:02.758 00:35:56 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:02.758 00:35:56 -- common/autotest_common.sh@887 -- # local i 00:22:02.758 00:35:56 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:02.758 00:35:56 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:02.758 00:35:56 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:03.325 00:35:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:03.325 [ 00:22:03.325 { 00:22:03.325 "name": "BaseBdev3", 00:22:03.325 "aliases": [ 00:22:03.325 "79b04046-3a7d-415e-8992-a89d4545181a" 00:22:03.325 ], 00:22:03.325 "product_name": "Malloc disk", 00:22:03.325 "block_size": 512, 00:22:03.325 "num_blocks": 65536, 00:22:03.325 "uuid": "79b04046-3a7d-415e-8992-a89d4545181a", 00:22:03.325 "assigned_rate_limits": { 00:22:03.325 "rw_ios_per_sec": 0, 00:22:03.325 "rw_mbytes_per_sec": 0, 00:22:03.325 "r_mbytes_per_sec": 0, 00:22:03.325 "w_mbytes_per_sec": 0 00:22:03.325 }, 00:22:03.325 "claimed": true, 00:22:03.325 "claim_type": "exclusive_write", 00:22:03.325 "zoned": false, 00:22:03.325 "supported_io_types": { 00:22:03.325 "read": true, 00:22:03.325 "write": true, 00:22:03.325 "unmap": true, 00:22:03.325 "write_zeroes": true, 00:22:03.325 "flush": true, 00:22:03.325 "reset": true, 00:22:03.325 "compare": false, 00:22:03.325 "compare_and_write": false, 00:22:03.325 "abort": true, 00:22:03.325 "nvme_admin": false, 00:22:03.325 "nvme_io": false 00:22:03.325 }, 00:22:03.325 "memory_domains": [ 00:22:03.325 { 00:22:03.325 "dma_device_id": "system", 00:22:03.325 "dma_device_type": 1 00:22:03.325 }, 00:22:03.325 { 00:22:03.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.325 "dma_device_type": 2 00:22:03.325 } 00:22:03.325 ], 00:22:03.325 "driver_specific": {} 00:22:03.325 } 00:22:03.325 ] 00:22:03.583 00:35:57 -- common/autotest_common.sh@893 -- # return 0 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.583 00:35:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.842 00:35:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.842 "name": "Existed_Raid", 00:22:03.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.842 "strip_size_kb": 0, 00:22:03.842 
"state": "configuring", 00:22:03.842 "raid_level": "raid1", 00:22:03.842 "superblock": false, 00:22:03.842 "num_base_bdevs": 4, 00:22:03.842 "num_base_bdevs_discovered": 3, 00:22:03.842 "num_base_bdevs_operational": 4, 00:22:03.842 "base_bdevs_list": [ 00:22:03.842 { 00:22:03.842 "name": "BaseBdev1", 00:22:03.842 "uuid": "dad5b7a3-a144-41aa-b221-805ce9e462f4", 00:22:03.842 "is_configured": true, 00:22:03.842 "data_offset": 0, 00:22:03.842 "data_size": 65536 00:22:03.842 }, 00:22:03.842 { 00:22:03.842 "name": "BaseBdev2", 00:22:03.842 "uuid": "fe5ca62f-c7cb-422e-b26d-bd87127f55a9", 00:22:03.842 "is_configured": true, 00:22:03.842 "data_offset": 0, 00:22:03.842 "data_size": 65536 00:22:03.842 }, 00:22:03.842 { 00:22:03.842 "name": "BaseBdev3", 00:22:03.842 "uuid": "79b04046-3a7d-415e-8992-a89d4545181a", 00:22:03.842 "is_configured": true, 00:22:03.842 "data_offset": 0, 00:22:03.842 "data_size": 65536 00:22:03.842 }, 00:22:03.842 { 00:22:03.842 "name": "BaseBdev4", 00:22:03.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.842 "is_configured": false, 00:22:03.842 "data_offset": 0, 00:22:03.842 "data_size": 0 00:22:03.842 } 00:22:03.842 ] 00:22:03.842 }' 00:22:03.842 00:35:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.842 00:35:57 -- common/autotest_common.sh@10 -- # set +x 00:22:04.408 00:35:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:04.972 [2024-04-24 00:35:58.486241] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:04.972 [2024-04-24 00:35:58.487291] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:22:04.972 [2024-04-24 00:35:58.487348] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:04.972 [2024-04-24 00:35:58.487607] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:22:04.972 [2024-04-24 00:35:58.488107] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:22:04.972 [2024-04-24 00:35:58.488228] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:22:04.972 [2024-04-24 00:35:58.488599] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:04.972 BaseBdev4 00:22:04.972 00:35:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:22:04.972 00:35:58 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:22:04.972 00:35:58 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:04.972 00:35:58 -- common/autotest_common.sh@887 -- # local i 00:22:04.972 00:35:58 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:04.972 00:35:58 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:04.972 00:35:58 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:04.972 00:35:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:05.230 [ 00:22:05.230 { 00:22:05.230 "name": "BaseBdev4", 00:22:05.230 "aliases": [ 00:22:05.230 "6bfffa42-a01a-4833-8c06-28020960a8c3" 00:22:05.230 ], 00:22:05.230 "product_name": "Malloc disk", 00:22:05.230 "block_size": 512, 00:22:05.230 "num_blocks": 65536, 00:22:05.230 "uuid": "6bfffa42-a01a-4833-8c06-28020960a8c3", 00:22:05.230 "assigned_rate_limits": { 
00:22:05.230 "rw_ios_per_sec": 0, 00:22:05.230 "rw_mbytes_per_sec": 0, 00:22:05.230 "r_mbytes_per_sec": 0, 00:22:05.230 "w_mbytes_per_sec": 0 00:22:05.230 }, 00:22:05.230 "claimed": true, 00:22:05.230 "claim_type": "exclusive_write", 00:22:05.230 "zoned": false, 00:22:05.230 "supported_io_types": { 00:22:05.230 "read": true, 00:22:05.230 "write": true, 00:22:05.230 "unmap": true, 00:22:05.230 "write_zeroes": true, 00:22:05.230 "flush": true, 00:22:05.230 "reset": true, 00:22:05.230 "compare": false, 00:22:05.230 "compare_and_write": false, 00:22:05.230 "abort": true, 00:22:05.230 "nvme_admin": false, 00:22:05.230 "nvme_io": false 00:22:05.230 }, 00:22:05.230 "memory_domains": [ 00:22:05.230 { 00:22:05.230 "dma_device_id": "system", 00:22:05.230 "dma_device_type": 1 00:22:05.230 }, 00:22:05.230 { 00:22:05.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.230 "dma_device_type": 2 00:22:05.230 } 00:22:05.230 ], 00:22:05.230 "driver_specific": {} 00:22:05.230 } 00:22:05.230 ] 00:22:05.230 00:35:58 -- common/autotest_common.sh@893 -- # return 0 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.230 00:35:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.489 00:35:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.489 "name": "Existed_Raid", 00:22:05.489 "uuid": "3acdefc1-37a8-4874-ae7c-5f6adf4307e0", 00:22:05.489 "strip_size_kb": 0, 00:22:05.489 "state": "online", 00:22:05.489 "raid_level": "raid1", 00:22:05.489 "superblock": false, 00:22:05.489 "num_base_bdevs": 4, 00:22:05.489 "num_base_bdevs_discovered": 4, 00:22:05.489 "num_base_bdevs_operational": 4, 00:22:05.489 "base_bdevs_list": [ 00:22:05.489 { 00:22:05.489 "name": "BaseBdev1", 00:22:05.489 "uuid": "dad5b7a3-a144-41aa-b221-805ce9e462f4", 00:22:05.489 "is_configured": true, 00:22:05.489 "data_offset": 0, 00:22:05.489 "data_size": 65536 00:22:05.489 }, 00:22:05.489 { 00:22:05.489 "name": "BaseBdev2", 00:22:05.489 "uuid": "fe5ca62f-c7cb-422e-b26d-bd87127f55a9", 00:22:05.489 "is_configured": true, 00:22:05.489 "data_offset": 0, 00:22:05.489 "data_size": 65536 00:22:05.489 }, 00:22:05.489 { 00:22:05.489 "name": "BaseBdev3", 00:22:05.489 "uuid": "79b04046-3a7d-415e-8992-a89d4545181a", 00:22:05.489 "is_configured": true, 00:22:05.489 "data_offset": 0, 00:22:05.489 "data_size": 65536 00:22:05.489 }, 00:22:05.489 { 00:22:05.489 "name": "BaseBdev4", 00:22:05.489 "uuid": "6bfffa42-a01a-4833-8c06-28020960a8c3", 00:22:05.489 "is_configured": true, 00:22:05.489 "data_offset": 0, 
00:22:05.489 "data_size": 65536 00:22:05.489 } 00:22:05.489 ] 00:22:05.489 }' 00:22:05.489 00:35:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.489 00:35:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.055 00:35:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:06.313 [2024-04-24 00:35:59.990710] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.572 00:36:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.830 00:36:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:06.830 "name": "Existed_Raid", 00:22:06.830 "uuid": "3acdefc1-37a8-4874-ae7c-5f6adf4307e0", 00:22:06.830 "strip_size_kb": 0, 00:22:06.830 "state": "online", 00:22:06.830 "raid_level": "raid1", 00:22:06.830 "superblock": false, 00:22:06.830 "num_base_bdevs": 4, 00:22:06.830 "num_base_bdevs_discovered": 3, 00:22:06.830 "num_base_bdevs_operational": 3, 00:22:06.830 "base_bdevs_list": [ 00:22:06.830 { 00:22:06.830 "name": null, 00:22:06.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.830 "is_configured": false, 00:22:06.830 "data_offset": 0, 00:22:06.830 "data_size": 65536 00:22:06.830 }, 00:22:06.830 { 00:22:06.830 "name": "BaseBdev2", 00:22:06.830 "uuid": "fe5ca62f-c7cb-422e-b26d-bd87127f55a9", 00:22:06.830 "is_configured": true, 00:22:06.830 "data_offset": 0, 00:22:06.830 "data_size": 65536 00:22:06.830 }, 00:22:06.830 { 00:22:06.830 "name": "BaseBdev3", 00:22:06.830 "uuid": "79b04046-3a7d-415e-8992-a89d4545181a", 00:22:06.830 "is_configured": true, 00:22:06.830 "data_offset": 0, 00:22:06.830 "data_size": 65536 00:22:06.830 }, 00:22:06.830 { 00:22:06.830 "name": "BaseBdev4", 00:22:06.830 "uuid": "6bfffa42-a01a-4833-8c06-28020960a8c3", 00:22:06.830 "is_configured": true, 00:22:06.830 "data_offset": 0, 00:22:06.830 "data_size": 65536 00:22:06.830 } 00:22:06.830 ] 00:22:06.830 }' 00:22:06.830 00:36:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:06.830 00:36:00 -- common/autotest_common.sh@10 -- # set +x 00:22:07.396 00:36:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:07.396 00:36:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:07.396 00:36:01 -- 
bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:07.396 00:36:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.654 00:36:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:07.654 00:36:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:07.654 00:36:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:07.912 [2024-04-24 00:36:01.566749] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:07.912 00:36:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:07.912 00:36:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:08.170 00:36:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.170 00:36:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:08.170 00:36:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:08.170 00:36:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.170 00:36:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:08.738 [2024-04-24 00:36:02.245944] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:08.738 00:36:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:08.738 00:36:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:08.738 00:36:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:08.738 00:36:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.996 00:36:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:08.996 00:36:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.996 00:36:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:09.256 [2024-04-24 00:36:02.864344] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:09.256 [2024-04-24 00:36:02.864894] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:09.257 [2024-04-24 00:36:02.975990] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:09.257 [2024-04-24 00:36:02.976349] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:09.257 [2024-04-24 00:36:02.976456] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:22:09.257 00:36:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:09.257 00:36:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:09.257 00:36:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.257 00:36:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:09.529 00:36:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:09.529 00:36:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:09.529 00:36:03 -- bdev/bdev_raid.sh@287 -- # killprocess 129566 00:22:09.529 00:36:03 -- common/autotest_common.sh@936 -- # '[' -z 129566 ']' 00:22:09.529 00:36:03 -- common/autotest_common.sh@940 -- # kill -0 129566 00:22:09.530 00:36:03 -- common/autotest_common.sh@941 -- # uname 00:22:09.530 
00:36:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:09.530 00:36:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129566 00:22:09.530 00:36:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:09.530 00:36:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:09.530 00:36:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129566' 00:22:09.530 killing process with pid 129566 00:22:09.530 00:36:03 -- common/autotest_common.sh@955 -- # kill 129566 00:22:09.530 [2024-04-24 00:36:03.281381] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:09.530 00:36:03 -- common/autotest_common.sh@960 -- # wait 129566 00:22:09.530 [2024-04-24 00:36:03.281677] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:10.939 ************************************ 00:22:10.939 END TEST raid_state_function_test 00:22:10.939 ************************************ 00:22:10.939 00:36:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:10.939 00:22:10.939 real 0m16.070s 00:22:10.939 user 0m27.891s 00:22:10.939 sys 0m2.114s 00:22:10.939 00:36:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:10.939 00:36:04 -- common/autotest_common.sh@10 -- # set +x 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:22:11.199 00:36:04 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:11.199 00:36:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:11.199 00:36:04 -- common/autotest_common.sh@10 -- # set +x 00:22:11.199 ************************************ 00:22:11.199 START TEST raid_state_function_test_sb 00:22:11.199 ************************************ 00:22:11.199 00:36:04 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 4 true 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 
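Note: the teardown that closes raid_state_function_test above is driven through the same UNIX-socket RPC interface as the rest of the trace. A minimal sketch of that sequence, assuming a bdev_svc app already listening on /var/tmp/spdk-raid.sock (socket path and bdev names are taken from the log; the loop itself is illustrative, not the exact helper code):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # removing base bdevs from the online raid1 array; once the last one is gone
  # the raid bdev transitions online -> offline and is cleaned up
  for b in BaseBdev2 BaseBdev3 BaseBdev4; do
      $RPC bdev_malloc_delete "$b"
  done
  # earlier in the test the raid bdev itself is dropped while still configuring
  $RPC bdev_raid_delete Existed_Raid
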
00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=130036 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130036' 00:22:11.199 Process raid pid: 130036 00:22:11.199 00:36:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130036 /var/tmp/spdk-raid.sock 00:22:11.199 00:36:04 -- common/autotest_common.sh@817 -- # '[' -z 130036 ']' 00:22:11.199 00:36:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:11.199 00:36:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:11.199 00:36:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:11.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:11.199 00:36:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:11.199 00:36:04 -- common/autotest_common.sh@10 -- # set +x 00:22:11.199 [2024-04-24 00:36:04.886292] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:22:11.199 [2024-04-24 00:36:04.886727] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.457 [2024-04-24 00:36:05.071107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.715 [2024-04-24 00:36:05.298246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.974 [2024-04-24 00:36:05.540760] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:12.232 00:36:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:12.232 00:36:05 -- common/autotest_common.sh@850 -- # return 0 00:22:12.232 00:36:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:12.491 [2024-04-24 00:36:06.073261] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:12.491 [2024-04-24 00:36:06.073546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:12.491 [2024-04-24 00:36:06.073650] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:12.491 [2024-04-24 00:36:06.073726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:12.491 [2024-04-24 00:36:06.073863] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:12.491 [2024-04-24 00:36:06.073939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:12.491 [2024-04-24 00:36:06.074078] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:12.491 [2024-04-24 00:36:06.074139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev4 doesn't exist now 00:22:12.491 00:36:06 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:12.491 00:36:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:12.491 00:36:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:12.491 00:36:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:12.491 00:36:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:12.491 00:36:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:12.491 00:36:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:12.491 00:36:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:12.491 00:36:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:12.491 00:36:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:12.491 00:36:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.491 00:36:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.749 00:36:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:12.749 "name": "Existed_Raid", 00:22:12.750 "uuid": "89cb3aa7-6cc9-41c7-9a5c-b8c70f72b58c", 00:22:12.750 "strip_size_kb": 0, 00:22:12.750 "state": "configuring", 00:22:12.750 "raid_level": "raid1", 00:22:12.750 "superblock": true, 00:22:12.750 "num_base_bdevs": 4, 00:22:12.750 "num_base_bdevs_discovered": 0, 00:22:12.750 "num_base_bdevs_operational": 4, 00:22:12.750 "base_bdevs_list": [ 00:22:12.750 { 00:22:12.750 "name": "BaseBdev1", 00:22:12.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.750 "is_configured": false, 00:22:12.750 "data_offset": 0, 00:22:12.750 "data_size": 0 00:22:12.750 }, 00:22:12.750 { 00:22:12.750 "name": "BaseBdev2", 00:22:12.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.750 "is_configured": false, 00:22:12.750 "data_offset": 0, 00:22:12.750 "data_size": 0 00:22:12.750 }, 00:22:12.750 { 00:22:12.750 "name": "BaseBdev3", 00:22:12.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.750 "is_configured": false, 00:22:12.750 "data_offset": 0, 00:22:12.750 "data_size": 0 00:22:12.750 }, 00:22:12.750 { 00:22:12.750 "name": "BaseBdev4", 00:22:12.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.750 "is_configured": false, 00:22:12.750 "data_offset": 0, 00:22:12.750 "data_size": 0 00:22:12.750 } 00:22:12.750 ] 00:22:12.750 }' 00:22:12.750 00:36:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:12.750 00:36:06 -- common/autotest_common.sh@10 -- # set +x 00:22:13.316 00:36:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:13.316 [2024-04-24 00:36:07.053304] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:13.316 [2024-04-24 00:36:07.053577] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:22:13.316 00:36:07 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:13.575 [2024-04-24 00:36:07.265417] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:13.575 [2024-04-24 00:36:07.265635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:13.575 [2024-04-24 00:36:07.265778] 
bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:13.575 [2024-04-24 00:36:07.265901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:13.575 [2024-04-24 00:36:07.265979] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:13.575 [2024-04-24 00:36:07.266057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:13.575 [2024-04-24 00:36:07.266195] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:13.575 [2024-04-24 00:36:07.266251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:13.575 00:36:07 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:13.832 [2024-04-24 00:36:07.521692] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:13.832 BaseBdev1 00:22:13.832 00:36:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:13.832 00:36:07 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:22:13.832 00:36:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:13.832 00:36:07 -- common/autotest_common.sh@887 -- # local i 00:22:13.832 00:36:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:13.832 00:36:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:13.832 00:36:07 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:14.090 00:36:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:14.348 [ 00:22:14.348 { 00:22:14.348 "name": "BaseBdev1", 00:22:14.348 "aliases": [ 00:22:14.348 "32ae68e1-fe24-495d-a84c-a149046746bf" 00:22:14.348 ], 00:22:14.348 "product_name": "Malloc disk", 00:22:14.348 "block_size": 512, 00:22:14.348 "num_blocks": 65536, 00:22:14.348 "uuid": "32ae68e1-fe24-495d-a84c-a149046746bf", 00:22:14.348 "assigned_rate_limits": { 00:22:14.348 "rw_ios_per_sec": 0, 00:22:14.348 "rw_mbytes_per_sec": 0, 00:22:14.348 "r_mbytes_per_sec": 0, 00:22:14.348 "w_mbytes_per_sec": 0 00:22:14.348 }, 00:22:14.348 "claimed": true, 00:22:14.348 "claim_type": "exclusive_write", 00:22:14.348 "zoned": false, 00:22:14.348 "supported_io_types": { 00:22:14.348 "read": true, 00:22:14.348 "write": true, 00:22:14.348 "unmap": true, 00:22:14.348 "write_zeroes": true, 00:22:14.348 "flush": true, 00:22:14.348 "reset": true, 00:22:14.348 "compare": false, 00:22:14.348 "compare_and_write": false, 00:22:14.348 "abort": true, 00:22:14.348 "nvme_admin": false, 00:22:14.348 "nvme_io": false 00:22:14.348 }, 00:22:14.348 "memory_domains": [ 00:22:14.348 { 00:22:14.348 "dma_device_id": "system", 00:22:14.348 "dma_device_type": 1 00:22:14.348 }, 00:22:14.348 { 00:22:14.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.348 "dma_device_type": 2 00:22:14.348 } 00:22:14.348 ], 00:22:14.348 "driver_specific": {} 00:22:14.348 } 00:22:14.348 ] 00:22:14.348 00:36:08 -- common/autotest_common.sh@893 -- # return 0 00:22:14.348 00:36:08 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:14.348 00:36:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:14.348 00:36:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:14.348 
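For reference, the create-and-grow cycle the superblock variant is running here reduces to a handful of RPCs. A hedged sketch (socket path, sizes and flags copied from the trace above; the error handling of the real bdev_raid.sh helpers is omitted):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # register the raid1 target first; with -s a superblock is used, and the bdev
  # stays in "configuring" until all four base bdevs have been discovered
  $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # add base bdevs one at a time: 32 MiB malloc disks with 512-byte blocks
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_wait_for_examine
  $RPC bdev_get_bdevs -b BaseBdev1 -t 2000   # wait until the new bdev is registered
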
00:36:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:14.348 00:36:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:14.348 00:36:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:14.348 00:36:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:14.348 00:36:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:14.348 00:36:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:14.348 00:36:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:14.348 00:36:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.348 00:36:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.606 00:36:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:14.606 "name": "Existed_Raid", 00:22:14.606 "uuid": "0c806ed1-7bbf-4a76-a71b-03f504584727", 00:22:14.606 "strip_size_kb": 0, 00:22:14.606 "state": "configuring", 00:22:14.606 "raid_level": "raid1", 00:22:14.606 "superblock": true, 00:22:14.606 "num_base_bdevs": 4, 00:22:14.606 "num_base_bdevs_discovered": 1, 00:22:14.606 "num_base_bdevs_operational": 4, 00:22:14.606 "base_bdevs_list": [ 00:22:14.606 { 00:22:14.606 "name": "BaseBdev1", 00:22:14.606 "uuid": "32ae68e1-fe24-495d-a84c-a149046746bf", 00:22:14.606 "is_configured": true, 00:22:14.606 "data_offset": 2048, 00:22:14.606 "data_size": 63488 00:22:14.606 }, 00:22:14.606 { 00:22:14.606 "name": "BaseBdev2", 00:22:14.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.606 "is_configured": false, 00:22:14.606 "data_offset": 0, 00:22:14.606 "data_size": 0 00:22:14.606 }, 00:22:14.606 { 00:22:14.606 "name": "BaseBdev3", 00:22:14.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.606 "is_configured": false, 00:22:14.606 "data_offset": 0, 00:22:14.606 "data_size": 0 00:22:14.606 }, 00:22:14.606 { 00:22:14.606 "name": "BaseBdev4", 00:22:14.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.606 "is_configured": false, 00:22:14.606 "data_offset": 0, 00:22:14.606 "data_size": 0 00:22:14.606 } 00:22:14.606 ] 00:22:14.606 }' 00:22:14.606 00:36:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:14.606 00:36:08 -- common/autotest_common.sh@10 -- # set +x 00:22:15.172 00:36:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:15.431 [2024-04-24 00:36:09.178120] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:15.431 [2024-04-24 00:36:09.178387] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:22:15.431 00:36:09 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:22:15.431 00:36:09 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:15.997 00:36:09 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:16.255 BaseBdev1 00:22:16.255 00:36:09 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:22:16.255 00:36:09 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:22:16.255 00:36:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:16.255 00:36:09 -- common/autotest_common.sh@887 -- # local i 00:22:16.255 00:36:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:16.255 00:36:09 -- common/autotest_common.sh@888 -- 
# bdev_timeout=2000 00:22:16.255 00:36:09 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:16.513 00:36:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:16.513 [ 00:22:16.513 { 00:22:16.513 "name": "BaseBdev1", 00:22:16.513 "aliases": [ 00:22:16.513 "5c003d7c-f1a4-4834-9683-0dc9315deed0" 00:22:16.513 ], 00:22:16.513 "product_name": "Malloc disk", 00:22:16.513 "block_size": 512, 00:22:16.513 "num_blocks": 65536, 00:22:16.513 "uuid": "5c003d7c-f1a4-4834-9683-0dc9315deed0", 00:22:16.513 "assigned_rate_limits": { 00:22:16.513 "rw_ios_per_sec": 0, 00:22:16.513 "rw_mbytes_per_sec": 0, 00:22:16.513 "r_mbytes_per_sec": 0, 00:22:16.513 "w_mbytes_per_sec": 0 00:22:16.513 }, 00:22:16.513 "claimed": false, 00:22:16.513 "zoned": false, 00:22:16.513 "supported_io_types": { 00:22:16.513 "read": true, 00:22:16.513 "write": true, 00:22:16.513 "unmap": true, 00:22:16.513 "write_zeroes": true, 00:22:16.513 "flush": true, 00:22:16.513 "reset": true, 00:22:16.513 "compare": false, 00:22:16.513 "compare_and_write": false, 00:22:16.513 "abort": true, 00:22:16.513 "nvme_admin": false, 00:22:16.513 "nvme_io": false 00:22:16.513 }, 00:22:16.513 "memory_domains": [ 00:22:16.513 { 00:22:16.513 "dma_device_id": "system", 00:22:16.513 "dma_device_type": 1 00:22:16.513 }, 00:22:16.513 { 00:22:16.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.513 "dma_device_type": 2 00:22:16.513 } 00:22:16.513 ], 00:22:16.513 "driver_specific": {} 00:22:16.513 } 00:22:16.513 ] 00:22:16.513 00:36:10 -- common/autotest_common.sh@893 -- # return 0 00:22:16.513 00:36:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:16.771 [2024-04-24 00:36:10.456931] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:16.771 [2024-04-24 00:36:10.459209] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:16.771 [2024-04-24 00:36:10.459395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:16.771 [2024-04-24 00:36:10.459502] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:16.771 [2024-04-24 00:36:10.459621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:16.771 [2024-04-24 00:36:10.459708] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:16.771 [2024-04-24 00:36:10.459795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@122 -- # 
local raid_bdev_info 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.771 00:36:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.030 00:36:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:17.030 "name": "Existed_Raid", 00:22:17.030 "uuid": "95a2c21e-b426-4d05-9e06-cf4581190984", 00:22:17.030 "strip_size_kb": 0, 00:22:17.030 "state": "configuring", 00:22:17.030 "raid_level": "raid1", 00:22:17.030 "superblock": true, 00:22:17.030 "num_base_bdevs": 4, 00:22:17.030 "num_base_bdevs_discovered": 1, 00:22:17.030 "num_base_bdevs_operational": 4, 00:22:17.030 "base_bdevs_list": [ 00:22:17.030 { 00:22:17.030 "name": "BaseBdev1", 00:22:17.030 "uuid": "5c003d7c-f1a4-4834-9683-0dc9315deed0", 00:22:17.030 "is_configured": true, 00:22:17.030 "data_offset": 2048, 00:22:17.030 "data_size": 63488 00:22:17.030 }, 00:22:17.030 { 00:22:17.030 "name": "BaseBdev2", 00:22:17.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.030 "is_configured": false, 00:22:17.030 "data_offset": 0, 00:22:17.030 "data_size": 0 00:22:17.030 }, 00:22:17.030 { 00:22:17.030 "name": "BaseBdev3", 00:22:17.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.030 "is_configured": false, 00:22:17.030 "data_offset": 0, 00:22:17.030 "data_size": 0 00:22:17.030 }, 00:22:17.030 { 00:22:17.030 "name": "BaseBdev4", 00:22:17.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.030 "is_configured": false, 00:22:17.030 "data_offset": 0, 00:22:17.030 "data_size": 0 00:22:17.030 } 00:22:17.030 ] 00:22:17.030 }' 00:22:17.030 00:36:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:17.030 00:36:10 -- common/autotest_common.sh@10 -- # set +x 00:22:17.596 00:36:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:18.175 [2024-04-24 00:36:11.652423] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:18.175 BaseBdev2 00:22:18.175 00:36:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:18.175 00:36:11 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:22:18.175 00:36:11 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:18.175 00:36:11 -- common/autotest_common.sh@887 -- # local i 00:22:18.175 00:36:11 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:18.175 00:36:11 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:18.175 00:36:11 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:18.175 00:36:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:18.433 [ 00:22:18.433 { 00:22:18.433 "name": "BaseBdev2", 00:22:18.433 "aliases": [ 00:22:18.433 "3b1e9468-f8cd-4a24-b628-927dd0dcf0d3" 00:22:18.433 ], 00:22:18.433 "product_name": "Malloc disk", 00:22:18.433 "block_size": 512, 00:22:18.433 "num_blocks": 65536, 00:22:18.433 "uuid": "3b1e9468-f8cd-4a24-b628-927dd0dcf0d3", 00:22:18.433 "assigned_rate_limits": { 00:22:18.433 "rw_ios_per_sec": 0, 00:22:18.433 "rw_mbytes_per_sec": 0, 00:22:18.433 
"r_mbytes_per_sec": 0, 00:22:18.433 "w_mbytes_per_sec": 0 00:22:18.433 }, 00:22:18.433 "claimed": true, 00:22:18.433 "claim_type": "exclusive_write", 00:22:18.433 "zoned": false, 00:22:18.433 "supported_io_types": { 00:22:18.433 "read": true, 00:22:18.433 "write": true, 00:22:18.433 "unmap": true, 00:22:18.433 "write_zeroes": true, 00:22:18.433 "flush": true, 00:22:18.433 "reset": true, 00:22:18.433 "compare": false, 00:22:18.433 "compare_and_write": false, 00:22:18.433 "abort": true, 00:22:18.433 "nvme_admin": false, 00:22:18.433 "nvme_io": false 00:22:18.433 }, 00:22:18.433 "memory_domains": [ 00:22:18.433 { 00:22:18.433 "dma_device_id": "system", 00:22:18.433 "dma_device_type": 1 00:22:18.433 }, 00:22:18.433 { 00:22:18.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.433 "dma_device_type": 2 00:22:18.433 } 00:22:18.433 ], 00:22:18.433 "driver_specific": {} 00:22:18.433 } 00:22:18.433 ] 00:22:18.433 00:36:12 -- common/autotest_common.sh@893 -- # return 0 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.433 00:36:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.692 00:36:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:18.692 "name": "Existed_Raid", 00:22:18.692 "uuid": "95a2c21e-b426-4d05-9e06-cf4581190984", 00:22:18.692 "strip_size_kb": 0, 00:22:18.692 "state": "configuring", 00:22:18.692 "raid_level": "raid1", 00:22:18.692 "superblock": true, 00:22:18.692 "num_base_bdevs": 4, 00:22:18.692 "num_base_bdevs_discovered": 2, 00:22:18.692 "num_base_bdevs_operational": 4, 00:22:18.692 "base_bdevs_list": [ 00:22:18.692 { 00:22:18.692 "name": "BaseBdev1", 00:22:18.692 "uuid": "5c003d7c-f1a4-4834-9683-0dc9315deed0", 00:22:18.692 "is_configured": true, 00:22:18.692 "data_offset": 2048, 00:22:18.692 "data_size": 63488 00:22:18.692 }, 00:22:18.692 { 00:22:18.692 "name": "BaseBdev2", 00:22:18.692 "uuid": "3b1e9468-f8cd-4a24-b628-927dd0dcf0d3", 00:22:18.692 "is_configured": true, 00:22:18.692 "data_offset": 2048, 00:22:18.692 "data_size": 63488 00:22:18.692 }, 00:22:18.692 { 00:22:18.692 "name": "BaseBdev3", 00:22:18.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.692 "is_configured": false, 00:22:18.692 "data_offset": 0, 00:22:18.692 "data_size": 0 00:22:18.692 }, 00:22:18.692 { 00:22:18.692 "name": "BaseBdev4", 00:22:18.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.692 "is_configured": false, 00:22:18.692 "data_offset": 0, 00:22:18.692 "data_size": 0 00:22:18.692 } 00:22:18.692 ] 
00:22:18.692 }' 00:22:18.692 00:36:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:18.692 00:36:12 -- common/autotest_common.sh@10 -- # set +x 00:22:19.258 00:36:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:19.824 [2024-04-24 00:36:13.335490] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:19.824 BaseBdev3 00:22:19.824 00:36:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:19.824 00:36:13 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:22:19.824 00:36:13 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:19.824 00:36:13 -- common/autotest_common.sh@887 -- # local i 00:22:19.824 00:36:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:19.824 00:36:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:19.824 00:36:13 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:19.824 00:36:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:20.082 [ 00:22:20.082 { 00:22:20.082 "name": "BaseBdev3", 00:22:20.082 "aliases": [ 00:22:20.082 "50c16b94-7f79-4a06-9512-578b717f0875" 00:22:20.082 ], 00:22:20.082 "product_name": "Malloc disk", 00:22:20.082 "block_size": 512, 00:22:20.082 "num_blocks": 65536, 00:22:20.082 "uuid": "50c16b94-7f79-4a06-9512-578b717f0875", 00:22:20.082 "assigned_rate_limits": { 00:22:20.082 "rw_ios_per_sec": 0, 00:22:20.082 "rw_mbytes_per_sec": 0, 00:22:20.082 "r_mbytes_per_sec": 0, 00:22:20.082 "w_mbytes_per_sec": 0 00:22:20.082 }, 00:22:20.082 "claimed": true, 00:22:20.082 "claim_type": "exclusive_write", 00:22:20.082 "zoned": false, 00:22:20.082 "supported_io_types": { 00:22:20.082 "read": true, 00:22:20.082 "write": true, 00:22:20.082 "unmap": true, 00:22:20.082 "write_zeroes": true, 00:22:20.082 "flush": true, 00:22:20.082 "reset": true, 00:22:20.082 "compare": false, 00:22:20.082 "compare_and_write": false, 00:22:20.082 "abort": true, 00:22:20.082 "nvme_admin": false, 00:22:20.082 "nvme_io": false 00:22:20.082 }, 00:22:20.082 "memory_domains": [ 00:22:20.082 { 00:22:20.082 "dma_device_id": "system", 00:22:20.082 "dma_device_type": 1 00:22:20.082 }, 00:22:20.083 { 00:22:20.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.083 "dma_device_type": 2 00:22:20.083 } 00:22:20.083 ], 00:22:20.083 "driver_specific": {} 00:22:20.083 } 00:22:20.083 ] 00:22:20.083 00:36:13 -- common/autotest_common.sh@893 -- # return 0 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.083 00:36:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.342 00:36:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:20.342 "name": "Existed_Raid", 00:22:20.342 "uuid": "95a2c21e-b426-4d05-9e06-cf4581190984", 00:22:20.342 "strip_size_kb": 0, 00:22:20.342 "state": "configuring", 00:22:20.342 "raid_level": "raid1", 00:22:20.342 "superblock": true, 00:22:20.342 "num_base_bdevs": 4, 00:22:20.342 "num_base_bdevs_discovered": 3, 00:22:20.342 "num_base_bdevs_operational": 4, 00:22:20.342 "base_bdevs_list": [ 00:22:20.342 { 00:22:20.342 "name": "BaseBdev1", 00:22:20.342 "uuid": "5c003d7c-f1a4-4834-9683-0dc9315deed0", 00:22:20.342 "is_configured": true, 00:22:20.342 "data_offset": 2048, 00:22:20.342 "data_size": 63488 00:22:20.342 }, 00:22:20.342 { 00:22:20.342 "name": "BaseBdev2", 00:22:20.342 "uuid": "3b1e9468-f8cd-4a24-b628-927dd0dcf0d3", 00:22:20.342 "is_configured": true, 00:22:20.342 "data_offset": 2048, 00:22:20.342 "data_size": 63488 00:22:20.342 }, 00:22:20.342 { 00:22:20.342 "name": "BaseBdev3", 00:22:20.342 "uuid": "50c16b94-7f79-4a06-9512-578b717f0875", 00:22:20.342 "is_configured": true, 00:22:20.342 "data_offset": 2048, 00:22:20.342 "data_size": 63488 00:22:20.342 }, 00:22:20.342 { 00:22:20.342 "name": "BaseBdev4", 00:22:20.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.342 "is_configured": false, 00:22:20.342 "data_offset": 0, 00:22:20.342 "data_size": 0 00:22:20.342 } 00:22:20.342 ] 00:22:20.342 }' 00:22:20.342 00:36:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:20.342 00:36:14 -- common/autotest_common.sh@10 -- # set +x 00:22:20.982 00:36:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:21.241 [2024-04-24 00:36:14.936047] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:21.241 [2024-04-24 00:36:14.936508] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:22:21.241 [2024-04-24 00:36:14.936629] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:21.241 [2024-04-24 00:36:14.936814] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:21.241 [2024-04-24 00:36:14.937289] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:22:21.241 [2024-04-24 00:36:14.937406] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:22:21.241 BaseBdev4 00:22:21.241 [2024-04-24 00:36:14.937691] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.241 00:36:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:22:21.241 00:36:14 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:22:21.241 00:36:14 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:21.241 00:36:14 -- common/autotest_common.sh@887 -- # local i 00:22:21.241 00:36:14 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:21.241 00:36:14 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:21.241 00:36:14 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:21.500 
00:36:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:21.759 [ 00:22:21.759 { 00:22:21.759 "name": "BaseBdev4", 00:22:21.759 "aliases": [ 00:22:21.759 "05384394-9dfb-4df1-b2f7-ef67ced08b8c" 00:22:21.759 ], 00:22:21.759 "product_name": "Malloc disk", 00:22:21.759 "block_size": 512, 00:22:21.759 "num_blocks": 65536, 00:22:21.759 "uuid": "05384394-9dfb-4df1-b2f7-ef67ced08b8c", 00:22:21.759 "assigned_rate_limits": { 00:22:21.759 "rw_ios_per_sec": 0, 00:22:21.759 "rw_mbytes_per_sec": 0, 00:22:21.759 "r_mbytes_per_sec": 0, 00:22:21.759 "w_mbytes_per_sec": 0 00:22:21.759 }, 00:22:21.759 "claimed": true, 00:22:21.759 "claim_type": "exclusive_write", 00:22:21.759 "zoned": false, 00:22:21.759 "supported_io_types": { 00:22:21.759 "read": true, 00:22:21.759 "write": true, 00:22:21.759 "unmap": true, 00:22:21.759 "write_zeroes": true, 00:22:21.759 "flush": true, 00:22:21.759 "reset": true, 00:22:21.759 "compare": false, 00:22:21.759 "compare_and_write": false, 00:22:21.759 "abort": true, 00:22:21.759 "nvme_admin": false, 00:22:21.759 "nvme_io": false 00:22:21.759 }, 00:22:21.759 "memory_domains": [ 00:22:21.759 { 00:22:21.759 "dma_device_id": "system", 00:22:21.759 "dma_device_type": 1 00:22:21.759 }, 00:22:21.759 { 00:22:21.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.759 "dma_device_type": 2 00:22:21.759 } 00:22:21.759 ], 00:22:21.759 "driver_specific": {} 00:22:21.759 } 00:22:21.759 ] 00:22:21.759 00:36:15 -- common/autotest_common.sh@893 -- # return 0 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.759 00:36:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.016 00:36:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.016 "name": "Existed_Raid", 00:22:22.016 "uuid": "95a2c21e-b426-4d05-9e06-cf4581190984", 00:22:22.016 "strip_size_kb": 0, 00:22:22.016 "state": "online", 00:22:22.016 "raid_level": "raid1", 00:22:22.016 "superblock": true, 00:22:22.016 "num_base_bdevs": 4, 00:22:22.016 "num_base_bdevs_discovered": 4, 00:22:22.016 "num_base_bdevs_operational": 4, 00:22:22.016 "base_bdevs_list": [ 00:22:22.016 { 00:22:22.016 "name": "BaseBdev1", 00:22:22.016 "uuid": "5c003d7c-f1a4-4834-9683-0dc9315deed0", 00:22:22.016 "is_configured": true, 00:22:22.016 "data_offset": 2048, 00:22:22.016 "data_size": 63488 00:22:22.016 }, 00:22:22.016 { 00:22:22.016 "name": "BaseBdev2", 00:22:22.016 "uuid": 
"3b1e9468-f8cd-4a24-b628-927dd0dcf0d3", 00:22:22.016 "is_configured": true, 00:22:22.016 "data_offset": 2048, 00:22:22.016 "data_size": 63488 00:22:22.016 }, 00:22:22.016 { 00:22:22.016 "name": "BaseBdev3", 00:22:22.016 "uuid": "50c16b94-7f79-4a06-9512-578b717f0875", 00:22:22.016 "is_configured": true, 00:22:22.016 "data_offset": 2048, 00:22:22.016 "data_size": 63488 00:22:22.016 }, 00:22:22.016 { 00:22:22.016 "name": "BaseBdev4", 00:22:22.016 "uuid": "05384394-9dfb-4df1-b2f7-ef67ced08b8c", 00:22:22.016 "is_configured": true, 00:22:22.016 "data_offset": 2048, 00:22:22.016 "data_size": 63488 00:22:22.016 } 00:22:22.016 ] 00:22:22.016 }' 00:22:22.016 00:36:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.016 00:36:15 -- common/autotest_common.sh@10 -- # set +x 00:22:22.584 00:36:16 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:22.877 [2024-04-24 00:36:16.408522] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.877 00:36:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.135 00:36:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:23.135 "name": "Existed_Raid", 00:22:23.135 "uuid": "95a2c21e-b426-4d05-9e06-cf4581190984", 00:22:23.135 "strip_size_kb": 0, 00:22:23.135 "state": "online", 00:22:23.135 "raid_level": "raid1", 00:22:23.135 "superblock": true, 00:22:23.135 "num_base_bdevs": 4, 00:22:23.135 "num_base_bdevs_discovered": 3, 00:22:23.135 "num_base_bdevs_operational": 3, 00:22:23.135 "base_bdevs_list": [ 00:22:23.135 { 00:22:23.135 "name": null, 00:22:23.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.135 "is_configured": false, 00:22:23.135 "data_offset": 2048, 00:22:23.135 "data_size": 63488 00:22:23.135 }, 00:22:23.135 { 00:22:23.135 "name": "BaseBdev2", 00:22:23.135 "uuid": "3b1e9468-f8cd-4a24-b628-927dd0dcf0d3", 00:22:23.135 "is_configured": true, 00:22:23.135 "data_offset": 2048, 00:22:23.135 "data_size": 63488 00:22:23.135 }, 00:22:23.135 { 00:22:23.135 "name": "BaseBdev3", 00:22:23.135 "uuid": "50c16b94-7f79-4a06-9512-578b717f0875", 00:22:23.135 "is_configured": true, 00:22:23.135 "data_offset": 2048, 00:22:23.135 "data_size": 63488 
00:22:23.135 }, 00:22:23.135 { 00:22:23.135 "name": "BaseBdev4", 00:22:23.135 "uuid": "05384394-9dfb-4df1-b2f7-ef67ced08b8c", 00:22:23.135 "is_configured": true, 00:22:23.135 "data_offset": 2048, 00:22:23.135 "data_size": 63488 00:22:23.135 } 00:22:23.135 ] 00:22:23.135 }' 00:22:23.135 00:36:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:23.135 00:36:16 -- common/autotest_common.sh@10 -- # set +x 00:22:23.700 00:36:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:23.700 00:36:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:23.700 00:36:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.700 00:36:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:23.958 00:36:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:23.958 00:36:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:23.958 00:36:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:24.216 [2024-04-24 00:36:17.902457] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:24.474 00:36:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:24.474 00:36:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:24.474 00:36:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.474 00:36:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:24.474 00:36:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:24.474 00:36:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:24.474 00:36:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:25.039 [2024-04-24 00:36:18.533631] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:25.039 00:36:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:25.039 00:36:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:25.039 00:36:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.039 00:36:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:25.302 00:36:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:25.302 00:36:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:25.302 00:36:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:25.302 [2024-04-24 00:36:19.040282] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:25.302 [2024-04-24 00:36:19.040587] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:25.562 [2024-04-24 00:36:19.144664] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:25.562 [2024-04-24 00:36:19.145041] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:25.562 [2024-04-24 00:36:19.145136] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:22:25.562 00:36:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:25.562 00:36:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:25.562 00:36:19 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 
00:22:25.562 00:36:19 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.822 00:36:19 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:25.822 00:36:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:25.822 00:36:19 -- bdev/bdev_raid.sh@287 -- # killprocess 130036 00:22:25.822 00:36:19 -- common/autotest_common.sh@936 -- # '[' -z 130036 ']' 00:22:25.822 00:36:19 -- common/autotest_common.sh@940 -- # kill -0 130036 00:22:25.822 00:36:19 -- common/autotest_common.sh@941 -- # uname 00:22:25.822 00:36:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:25.822 00:36:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130036 00:22:25.822 killing process with pid 130036 00:22:25.822 00:36:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:25.822 00:36:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:25.822 00:36:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130036' 00:22:25.822 00:36:19 -- common/autotest_common.sh@955 -- # kill 130036 00:22:25.822 [2024-04-24 00:36:19.483773] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:25.822 00:36:19 -- common/autotest_common.sh@960 -- # wait 130036 00:22:25.822 [2024-04-24 00:36:19.483892] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:27.194 ************************************ 00:22:27.194 END TEST raid_state_function_test_sb 00:22:27.194 ************************************ 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:27.194 00:22:27.194 real 0m16.075s 00:22:27.194 user 0m27.789s 00:22:27.194 sys 0m2.264s 00:22:27.194 00:36:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:27.194 00:36:20 -- common/autotest_common.sh@10 -- # set +x 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:22:27.194 00:36:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:22:27.194 00:36:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:27.194 00:36:20 -- common/autotest_common.sh@10 -- # set +x 00:22:27.194 ************************************ 00:22:27.194 START TEST raid_superblock_test 00:22:27.194 ************************************ 00:22:27.194 00:36:20 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid1 4 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:22:27.194 00:36:20 -- 
bdev/bdev_raid.sh@357 -- # raid_pid=130499 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@358 -- # waitforlisten 130499 /var/tmp/spdk-raid.sock 00:22:27.194 00:36:20 -- common/autotest_common.sh@817 -- # '[' -z 130499 ']' 00:22:27.194 00:36:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:27.194 00:36:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:27.194 00:36:20 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:27.194 00:36:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:27.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:27.194 00:36:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:27.194 00:36:20 -- common/autotest_common.sh@10 -- # set +x 00:22:27.508 [2024-04-24 00:36:21.057651] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:22:27.508 [2024-04-24 00:36:21.058148] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130499 ] 00:22:27.508 [2024-04-24 00:36:21.240390] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.769 [2024-04-24 00:36:21.507178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.029 [2024-04-24 00:36:21.735038] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:28.286 00:36:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:28.286 00:36:21 -- common/autotest_common.sh@850 -- # return 0 00:22:28.286 00:36:21 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:22:28.286 00:36:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:28.286 00:36:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:22:28.286 00:36:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:22:28.286 00:36:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:28.286 00:36:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:28.286 00:36:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:28.286 00:36:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:28.286 00:36:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:28.544 malloc1 00:22:28.544 00:36:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:28.801 [2024-04-24 00:36:22.495215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:28.801 [2024-04-24 00:36:22.495496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.801 [2024-04-24 00:36:22.495656] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:22:28.801 [2024-04-24 00:36:22.495865] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.801 [2024-04-24 00:36:22.498978] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.801 [2024-04-24 00:36:22.499164] vbdev_passthru.c: 705:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt1 00:22:28.801 pt1 00:22:28.801 00:36:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:28.801 00:36:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:28.801 00:36:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:22:28.801 00:36:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:22:28.801 00:36:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:28.801 00:36:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:28.801 00:36:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:28.801 00:36:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:28.801 00:36:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:29.364 malloc2 00:22:29.364 00:36:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:29.621 [2024-04-24 00:36:23.167710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:29.621 [2024-04-24 00:36:23.168020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.621 [2024-04-24 00:36:23.168117] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:29.621 [2024-04-24 00:36:23.168289] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.621 [2024-04-24 00:36:23.171102] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.621 [2024-04-24 00:36:23.171306] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:29.621 pt2 00:22:29.621 00:36:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:29.621 00:36:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:29.621 00:36:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:22:29.621 00:36:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:22:29.621 00:36:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:29.621 00:36:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:29.621 00:36:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:29.621 00:36:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:29.621 00:36:23 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:29.878 malloc3 00:22:29.878 00:36:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:30.135 [2024-04-24 00:36:23.731911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:30.135 [2024-04-24 00:36:23.732210] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.135 [2024-04-24 00:36:23.732314] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:22:30.135 [2024-04-24 00:36:23.732484] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.135 [2024-04-24 00:36:23.735491] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.135 [2024-04-24 00:36:23.735699] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt3 00:22:30.135 pt3 00:22:30.135 00:36:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:30.135 00:36:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:30.135 00:36:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:22:30.135 00:36:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:22:30.135 00:36:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:30.135 00:36:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:30.135 00:36:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:30.135 00:36:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:30.136 00:36:23 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:30.394 malloc4 00:22:30.394 00:36:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:30.651 [2024-04-24 00:36:24.255063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:30.651 [2024-04-24 00:36:24.255344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.651 [2024-04-24 00:36:24.255417] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:22:30.651 [2024-04-24 00:36:24.255688] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.651 [2024-04-24 00:36:24.258313] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.651 [2024-04-24 00:36:24.258518] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:30.651 pt4 00:22:30.651 00:36:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:30.651 00:36:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:30.651 00:36:24 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:30.910 [2024-04-24 00:36:24.467361] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:30.910 [2024-04-24 00:36:24.469696] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:30.910 [2024-04-24 00:36:24.469941] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:30.910 [2024-04-24 00:36:24.470034] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:30.910 [2024-04-24 00:36:24.470386] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:22:30.910 [2024-04-24 00:36:24.470513] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:30.910 [2024-04-24 00:36:24.470723] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:30.910 [2024-04-24 00:36:24.471218] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:22:30.910 [2024-04-24 00:36:24.471333] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:22:30.910 [2024-04-24 00:36:24.471623] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.910 00:36:24 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:30.910 00:36:24 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=raid_bdev1 00:22:30.910 00:36:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:30.910 00:36:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:30.910 00:36:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:30.910 00:36:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:30.910 00:36:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:30.910 00:36:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:30.910 00:36:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:30.910 00:36:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:30.910 00:36:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.910 00:36:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.168 00:36:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:31.168 "name": "raid_bdev1", 00:22:31.168 "uuid": "91f71aa2-5a94-4e62-9395-981328cd3324", 00:22:31.168 "strip_size_kb": 0, 00:22:31.168 "state": "online", 00:22:31.168 "raid_level": "raid1", 00:22:31.168 "superblock": true, 00:22:31.168 "num_base_bdevs": 4, 00:22:31.168 "num_base_bdevs_discovered": 4, 00:22:31.168 "num_base_bdevs_operational": 4, 00:22:31.168 "base_bdevs_list": [ 00:22:31.168 { 00:22:31.168 "name": "pt1", 00:22:31.168 "uuid": "c92a5e00-68e5-5a35-b234-9dac598c3118", 00:22:31.168 "is_configured": true, 00:22:31.168 "data_offset": 2048, 00:22:31.168 "data_size": 63488 00:22:31.168 }, 00:22:31.168 { 00:22:31.168 "name": "pt2", 00:22:31.168 "uuid": "20d1e7e2-3db9-5f53-999d-9fe9a249c191", 00:22:31.168 "is_configured": true, 00:22:31.168 "data_offset": 2048, 00:22:31.168 "data_size": 63488 00:22:31.168 }, 00:22:31.168 { 00:22:31.168 "name": "pt3", 00:22:31.168 "uuid": "126f8ecf-cd38-5888-9cbb-19ca1f4aede4", 00:22:31.168 "is_configured": true, 00:22:31.168 "data_offset": 2048, 00:22:31.168 "data_size": 63488 00:22:31.168 }, 00:22:31.168 { 00:22:31.168 "name": "pt4", 00:22:31.168 "uuid": "48130c8d-0579-5e9f-a35e-d5e5c00f0322", 00:22:31.168 "is_configured": true, 00:22:31.168 "data_offset": 2048, 00:22:31.168 "data_size": 63488 00:22:31.168 } 00:22:31.168 ] 00:22:31.168 }' 00:22:31.168 00:36:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:31.168 00:36:24 -- common/autotest_common.sh@10 -- # set +x 00:22:31.734 00:36:25 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:22:31.734 00:36:25 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:31.991 [2024-04-24 00:36:25.620098] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:31.992 00:36:25 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=91f71aa2-5a94-4e62-9395-981328cd3324 00:22:31.992 00:36:25 -- bdev/bdev_raid.sh@380 -- # '[' -z 91f71aa2-5a94-4e62-9395-981328cd3324 ']' 00:22:31.992 00:36:25 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:32.250 [2024-04-24 00:36:25.879873] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:32.250 [2024-04-24 00:36:25.880084] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:32.250 [2024-04-24 00:36:25.880246] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:32.250 [2024-04-24 00:36:25.880421] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid 
bdev base bdevs is 0, going to free all in destruct 00:22:32.250 [2024-04-24 00:36:25.880513] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:22:32.250 00:36:25 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:22:32.250 00:36:25 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.507 00:36:26 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:22:32.507 00:36:26 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:22:32.507 00:36:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:32.507 00:36:26 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:32.766 00:36:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:32.766 00:36:26 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:33.023 00:36:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:33.023 00:36:26 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:33.282 00:36:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:33.282 00:36:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:33.540 00:36:27 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:33.540 00:36:27 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:33.796 00:36:27 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:22:33.796 00:36:27 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:33.796 00:36:27 -- common/autotest_common.sh@638 -- # local es=0 00:22:33.796 00:36:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:33.796 00:36:27 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:33.796 00:36:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:33.796 00:36:27 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:33.796 00:36:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:33.796 00:36:27 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:33.796 00:36:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:33.796 00:36:27 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:33.796 00:36:27 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:33.796 00:36:27 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:34.092 [2024-04-24 00:36:27.752206] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:34.092 [2024-04-24 00:36:27.754563] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 
is claimed 00:22:34.092 [2024-04-24 00:36:27.754767] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:34.092 [2024-04-24 00:36:27.754911] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:34.092 [2024-04-24 00:36:27.755060] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:22:34.092 [2024-04-24 00:36:27.755226] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:22:34.092 [2024-04-24 00:36:27.755351] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:22:34.092 [2024-04-24 00:36:27.755442] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:22:34.092 [2024-04-24 00:36:27.755555] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:34.092 [2024-04-24 00:36:27.755593] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:22:34.092 request: 00:22:34.092 { 00:22:34.092 "name": "raid_bdev1", 00:22:34.092 "raid_level": "raid1", 00:22:34.092 "base_bdevs": [ 00:22:34.092 "malloc1", 00:22:34.092 "malloc2", 00:22:34.092 "malloc3", 00:22:34.092 "malloc4" 00:22:34.092 ], 00:22:34.092 "superblock": false, 00:22:34.092 "method": "bdev_raid_create", 00:22:34.092 "req_id": 1 00:22:34.092 } 00:22:34.092 Got JSON-RPC error response 00:22:34.092 response: 00:22:34.092 { 00:22:34.092 "code": -17, 00:22:34.092 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:34.092 } 00:22:34.092 00:36:27 -- common/autotest_common.sh@641 -- # es=1 00:22:34.092 00:36:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:34.092 00:36:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:34.092 00:36:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:34.092 00:36:27 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.092 00:36:27 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:22:34.350 00:36:27 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:22:34.350 00:36:27 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:22:34.350 00:36:27 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:34.607 [2024-04-24 00:36:28.244298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:34.607 [2024-04-24 00:36:28.244561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.607 [2024-04-24 00:36:28.244688] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:34.607 [2024-04-24 00:36:28.244789] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.607 [2024-04-24 00:36:28.247370] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.607 [2024-04-24 00:36:28.247558] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:34.607 [2024-04-24 00:36:28.247798] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:34.607 [2024-04-24 00:36:28.247932] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:34.607 pt1 
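The pt1 bdev registered just above is a passthru wrapper around a malloc bdev, created over the raid test RPC socket. A minimal sketch of that pair of calls, assuming the same socket path, names, UUID and geometry that this run logs (a 32 MB malloc bdev with 512-byte blocks, i.e. 65536 blocks):

# backing malloc bdev: 32 MB total, 512-byte block size
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
# passthru wrapper named pt1 with a fixed UUID, as used by the superblock test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001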
00:22:34.607 00:36:28 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:34.607 00:36:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:34.607 00:36:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:34.607 00:36:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:34.607 00:36:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:34.607 00:36:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:34.607 00:36:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:34.607 00:36:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:34.607 00:36:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:34.607 00:36:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:34.607 00:36:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.607 00:36:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.865 00:36:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:34.865 "name": "raid_bdev1", 00:22:34.865 "uuid": "91f71aa2-5a94-4e62-9395-981328cd3324", 00:22:34.865 "strip_size_kb": 0, 00:22:34.865 "state": "configuring", 00:22:34.865 "raid_level": "raid1", 00:22:34.865 "superblock": true, 00:22:34.865 "num_base_bdevs": 4, 00:22:34.865 "num_base_bdevs_discovered": 1, 00:22:34.865 "num_base_bdevs_operational": 4, 00:22:34.865 "base_bdevs_list": [ 00:22:34.865 { 00:22:34.865 "name": "pt1", 00:22:34.865 "uuid": "c92a5e00-68e5-5a35-b234-9dac598c3118", 00:22:34.865 "is_configured": true, 00:22:34.865 "data_offset": 2048, 00:22:34.865 "data_size": 63488 00:22:34.865 }, 00:22:34.865 { 00:22:34.865 "name": null, 00:22:34.865 "uuid": "20d1e7e2-3db9-5f53-999d-9fe9a249c191", 00:22:34.865 "is_configured": false, 00:22:34.865 "data_offset": 2048, 00:22:34.865 "data_size": 63488 00:22:34.865 }, 00:22:34.865 { 00:22:34.865 "name": null, 00:22:34.865 "uuid": "126f8ecf-cd38-5888-9cbb-19ca1f4aede4", 00:22:34.865 "is_configured": false, 00:22:34.865 "data_offset": 2048, 00:22:34.865 "data_size": 63488 00:22:34.865 }, 00:22:34.865 { 00:22:34.865 "name": null, 00:22:34.865 "uuid": "48130c8d-0579-5e9f-a35e-d5e5c00f0322", 00:22:34.865 "is_configured": false, 00:22:34.865 "data_offset": 2048, 00:22:34.865 "data_size": 63488 00:22:34.865 } 00:22:34.865 ] 00:22:34.865 }' 00:22:34.865 00:36:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:34.865 00:36:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.431 00:36:29 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:22:35.431 00:36:29 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:35.689 [2024-04-24 00:36:29.308526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:35.689 [2024-04-24 00:36:29.308804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.689 [2024-04-24 00:36:29.308946] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:35.689 [2024-04-24 00:36:29.309061] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.689 [2024-04-24 00:36:29.309647] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.689 [2024-04-24 00:36:29.309836] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 
00:22:35.689 [2024-04-24 00:36:29.310086] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:35.689 [2024-04-24 00:36:29.310215] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:35.689 pt2 00:22:35.689 00:36:29 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:35.948 [2024-04-24 00:36:29.576618] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:35.948 00:36:29 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:35.948 00:36:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:35.948 00:36:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:35.948 00:36:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:35.948 00:36:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:35.948 00:36:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:35.948 00:36:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:35.948 00:36:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:35.948 00:36:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:35.948 00:36:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:35.948 00:36:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.948 00:36:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.207 00:36:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:36.207 "name": "raid_bdev1", 00:22:36.207 "uuid": "91f71aa2-5a94-4e62-9395-981328cd3324", 00:22:36.207 "strip_size_kb": 0, 00:22:36.207 "state": "configuring", 00:22:36.207 "raid_level": "raid1", 00:22:36.207 "superblock": true, 00:22:36.207 "num_base_bdevs": 4, 00:22:36.207 "num_base_bdevs_discovered": 1, 00:22:36.207 "num_base_bdevs_operational": 4, 00:22:36.207 "base_bdevs_list": [ 00:22:36.207 { 00:22:36.207 "name": "pt1", 00:22:36.207 "uuid": "c92a5e00-68e5-5a35-b234-9dac598c3118", 00:22:36.207 "is_configured": true, 00:22:36.207 "data_offset": 2048, 00:22:36.207 "data_size": 63488 00:22:36.207 }, 00:22:36.207 { 00:22:36.207 "name": null, 00:22:36.207 "uuid": "20d1e7e2-3db9-5f53-999d-9fe9a249c191", 00:22:36.207 "is_configured": false, 00:22:36.207 "data_offset": 2048, 00:22:36.207 "data_size": 63488 00:22:36.207 }, 00:22:36.207 { 00:22:36.207 "name": null, 00:22:36.207 "uuid": "126f8ecf-cd38-5888-9cbb-19ca1f4aede4", 00:22:36.207 "is_configured": false, 00:22:36.207 "data_offset": 2048, 00:22:36.207 "data_size": 63488 00:22:36.207 }, 00:22:36.207 { 00:22:36.207 "name": null, 00:22:36.207 "uuid": "48130c8d-0579-5e9f-a35e-d5e5c00f0322", 00:22:36.207 "is_configured": false, 00:22:36.207 "data_offset": 2048, 00:22:36.207 "data_size": 63488 00:22:36.207 } 00:22:36.207 ] 00:22:36.207 }' 00:22:36.207 00:36:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:36.207 00:36:29 -- common/autotest_common.sh@10 -- # set +x 00:22:36.774 00:36:30 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:22:36.774 00:36:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:36.774 00:36:30 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:37.032 [2024-04-24 00:36:30.664821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:37.032 
[2024-04-24 00:36:30.665057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.032 [2024-04-24 00:36:30.665199] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:37.032 [2024-04-24 00:36:30.665294] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.032 [2024-04-24 00:36:30.665876] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.032 [2024-04-24 00:36:30.666058] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:37.032 [2024-04-24 00:36:30.666230] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:37.032 [2024-04-24 00:36:30.666352] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:37.032 pt2 00:22:37.032 00:36:30 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:37.032 00:36:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:37.032 00:36:30 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:37.291 [2024-04-24 00:36:30.940885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:37.291 [2024-04-24 00:36:30.941150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.291 [2024-04-24 00:36:30.941228] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:37.291 [2024-04-24 00:36:30.941366] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.291 [2024-04-24 00:36:30.941919] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.291 [2024-04-24 00:36:30.942109] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:37.291 [2024-04-24 00:36:30.942361] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:37.291 [2024-04-24 00:36:30.942465] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:37.291 pt3 00:22:37.291 00:36:30 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:37.291 00:36:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:37.291 00:36:30 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:37.552 [2024-04-24 00:36:31.160967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:37.552 [2024-04-24 00:36:31.161229] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.552 [2024-04-24 00:36:31.161301] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:37.552 [2024-04-24 00:36:31.161527] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.552 [2024-04-24 00:36:31.162052] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.552 [2024-04-24 00:36:31.162261] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:37.552 [2024-04-24 00:36:31.162516] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:37.552 [2024-04-24 00:36:31.162641] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:37.552 [2024-04-24 00:36:31.162852] 
bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:22:37.552 [2024-04-24 00:36:31.162975] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:37.552 [2024-04-24 00:36:31.163138] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:37.552 [2024-04-24 00:36:31.163621] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:22:37.552 [2024-04-24 00:36:31.163768] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:22:37.552 [2024-04-24 00:36:31.164028] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.552 pt4 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.552 00:36:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.811 00:36:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:37.811 "name": "raid_bdev1", 00:22:37.811 "uuid": "91f71aa2-5a94-4e62-9395-981328cd3324", 00:22:37.811 "strip_size_kb": 0, 00:22:37.811 "state": "online", 00:22:37.811 "raid_level": "raid1", 00:22:37.811 "superblock": true, 00:22:37.811 "num_base_bdevs": 4, 00:22:37.811 "num_base_bdevs_discovered": 4, 00:22:37.811 "num_base_bdevs_operational": 4, 00:22:37.811 "base_bdevs_list": [ 00:22:37.811 { 00:22:37.811 "name": "pt1", 00:22:37.811 "uuid": "c92a5e00-68e5-5a35-b234-9dac598c3118", 00:22:37.811 "is_configured": true, 00:22:37.811 "data_offset": 2048, 00:22:37.811 "data_size": 63488 00:22:37.811 }, 00:22:37.811 { 00:22:37.811 "name": "pt2", 00:22:37.811 "uuid": "20d1e7e2-3db9-5f53-999d-9fe9a249c191", 00:22:37.811 "is_configured": true, 00:22:37.811 "data_offset": 2048, 00:22:37.811 "data_size": 63488 00:22:37.811 }, 00:22:37.811 { 00:22:37.811 "name": "pt3", 00:22:37.811 "uuid": "126f8ecf-cd38-5888-9cbb-19ca1f4aede4", 00:22:37.811 "is_configured": true, 00:22:37.811 "data_offset": 2048, 00:22:37.811 "data_size": 63488 00:22:37.811 }, 00:22:37.811 { 00:22:37.811 "name": "pt4", 00:22:37.811 "uuid": "48130c8d-0579-5e9f-a35e-d5e5c00f0322", 00:22:37.811 "is_configured": true, 00:22:37.811 "data_offset": 2048, 00:22:37.811 "data_size": 63488 00:22:37.811 } 00:22:37.811 ] 00:22:37.811 }' 00:22:37.811 00:36:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:37.811 00:36:31 -- common/autotest_common.sh@10 -- # set +x 00:22:38.377 00:36:32 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:38.377 00:36:32 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:22:38.635 [2024-04-24 00:36:32.301437] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:38.635 00:36:32 -- bdev/bdev_raid.sh@430 -- # '[' 91f71aa2-5a94-4e62-9395-981328cd3324 '!=' 91f71aa2-5a94-4e62-9395-981328cd3324 ']' 00:22:38.635 00:36:32 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:22:38.635 00:36:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:38.635 00:36:32 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:38.635 00:36:32 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:38.893 [2024-04-24 00:36:32.569257] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:38.893 00:36:32 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:38.893 00:36:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:38.893 00:36:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:38.893 00:36:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:38.893 00:36:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:38.893 00:36:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:38.893 00:36:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:38.893 00:36:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:38.893 00:36:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:38.893 00:36:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:38.893 00:36:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.893 00:36:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.158 00:36:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:39.158 "name": "raid_bdev1", 00:22:39.158 "uuid": "91f71aa2-5a94-4e62-9395-981328cd3324", 00:22:39.158 "strip_size_kb": 0, 00:22:39.158 "state": "online", 00:22:39.158 "raid_level": "raid1", 00:22:39.158 "superblock": true, 00:22:39.158 "num_base_bdevs": 4, 00:22:39.158 "num_base_bdevs_discovered": 3, 00:22:39.158 "num_base_bdevs_operational": 3, 00:22:39.158 "base_bdevs_list": [ 00:22:39.158 { 00:22:39.158 "name": null, 00:22:39.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.158 "is_configured": false, 00:22:39.158 "data_offset": 2048, 00:22:39.158 "data_size": 63488 00:22:39.158 }, 00:22:39.158 { 00:22:39.158 "name": "pt2", 00:22:39.158 "uuid": "20d1e7e2-3db9-5f53-999d-9fe9a249c191", 00:22:39.158 "is_configured": true, 00:22:39.158 "data_offset": 2048, 00:22:39.158 "data_size": 63488 00:22:39.158 }, 00:22:39.158 { 00:22:39.158 "name": "pt3", 00:22:39.158 "uuid": "126f8ecf-cd38-5888-9cbb-19ca1f4aede4", 00:22:39.158 "is_configured": true, 00:22:39.158 "data_offset": 2048, 00:22:39.158 "data_size": 63488 00:22:39.158 }, 00:22:39.158 { 00:22:39.158 "name": "pt4", 00:22:39.158 "uuid": "48130c8d-0579-5e9f-a35e-d5e5c00f0322", 00:22:39.158 "is_configured": true, 00:22:39.158 "data_offset": 2048, 00:22:39.158 "data_size": 63488 00:22:39.158 } 00:22:39.158 ] 00:22:39.158 }' 00:22:39.158 00:36:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:39.158 00:36:32 -- common/autotest_common.sh@10 -- # set +x 00:22:39.749 00:36:33 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:40.010 [2024-04-24 
00:36:33.725472] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:40.010 [2024-04-24 00:36:33.725570] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:40.010 [2024-04-24 00:36:33.725670] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.010 [2024-04-24 00:36:33.725801] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.010 [2024-04-24 00:36:33.725932] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:22:40.010 00:36:33 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.010 00:36:33 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:22:40.269 00:36:33 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:22:40.269 00:36:33 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:22:40.269 00:36:33 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:22:40.269 00:36:33 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:40.269 00:36:33 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:40.527 00:36:34 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:40.527 00:36:34 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:40.527 00:36:34 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:40.785 00:36:34 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:40.785 00:36:34 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:40.785 00:36:34 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:41.043 00:36:34 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:41.043 00:36:34 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:41.043 00:36:34 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:22:41.043 00:36:34 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:41.043 00:36:34 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:41.301 [2024-04-24 00:36:34.997690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:41.301 [2024-04-24 00:36:34.997999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.301 [2024-04-24 00:36:34.998071] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:41.301 [2024-04-24 00:36:34.998179] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.301 [2024-04-24 00:36:35.000701] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.301 [2024-04-24 00:36:35.000891] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:41.301 [2024-04-24 00:36:35.001147] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:41.301 [2024-04-24 00:36:35.001269] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:41.301 pt2 00:22:41.301 00:36:35 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:41.301 00:36:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:41.301 00:36:35 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:41.301 00:36:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:41.301 00:36:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:41.301 00:36:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:41.301 00:36:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:41.301 00:36:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:41.301 00:36:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:41.301 00:36:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:41.301 00:36:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.301 00:36:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.560 00:36:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:41.560 "name": "raid_bdev1", 00:22:41.560 "uuid": "91f71aa2-5a94-4e62-9395-981328cd3324", 00:22:41.560 "strip_size_kb": 0, 00:22:41.560 "state": "configuring", 00:22:41.560 "raid_level": "raid1", 00:22:41.560 "superblock": true, 00:22:41.560 "num_base_bdevs": 4, 00:22:41.560 "num_base_bdevs_discovered": 1, 00:22:41.560 "num_base_bdevs_operational": 3, 00:22:41.560 "base_bdevs_list": [ 00:22:41.560 { 00:22:41.560 "name": null, 00:22:41.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.560 "is_configured": false, 00:22:41.560 "data_offset": 2048, 00:22:41.560 "data_size": 63488 00:22:41.560 }, 00:22:41.560 { 00:22:41.560 "name": "pt2", 00:22:41.560 "uuid": "20d1e7e2-3db9-5f53-999d-9fe9a249c191", 00:22:41.560 "is_configured": true, 00:22:41.560 "data_offset": 2048, 00:22:41.560 "data_size": 63488 00:22:41.560 }, 00:22:41.560 { 00:22:41.560 "name": null, 00:22:41.560 "uuid": "126f8ecf-cd38-5888-9cbb-19ca1f4aede4", 00:22:41.560 "is_configured": false, 00:22:41.560 "data_offset": 2048, 00:22:41.560 "data_size": 63488 00:22:41.560 }, 00:22:41.560 { 00:22:41.560 "name": null, 00:22:41.560 "uuid": "48130c8d-0579-5e9f-a35e-d5e5c00f0322", 00:22:41.560 "is_configured": false, 00:22:41.560 "data_offset": 2048, 00:22:41.560 "data_size": 63488 00:22:41.560 } 00:22:41.560 ] 00:22:41.560 }' 00:22:41.560 00:36:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:41.560 00:36:35 -- common/autotest_common.sh@10 -- # set +x 00:22:42.126 00:36:35 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:42.126 00:36:35 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:42.126 00:36:35 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:42.384 [2024-04-24 00:36:36.085934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:42.384 [2024-04-24 00:36:36.086235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.384 [2024-04-24 00:36:36.086318] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:42.384 [2024-04-24 00:36:36.086425] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.384 [2024-04-24 00:36:36.087055] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.384 [2024-04-24 00:36:36.087233] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:42.384 [2024-04-24 00:36:36.087454] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 
00:22:42.384 [2024-04-24 00:36:36.087594] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:42.384 pt3 00:22:42.384 00:36:36 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:42.384 00:36:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:42.384 00:36:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:42.384 00:36:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:42.384 00:36:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:42.384 00:36:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:42.384 00:36:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:42.384 00:36:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:42.384 00:36:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:42.384 00:36:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:42.384 00:36:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.384 00:36:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.642 00:36:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:42.642 "name": "raid_bdev1", 00:22:42.642 "uuid": "91f71aa2-5a94-4e62-9395-981328cd3324", 00:22:42.642 "strip_size_kb": 0, 00:22:42.642 "state": "configuring", 00:22:42.642 "raid_level": "raid1", 00:22:42.642 "superblock": true, 00:22:42.642 "num_base_bdevs": 4, 00:22:42.642 "num_base_bdevs_discovered": 2, 00:22:42.642 "num_base_bdevs_operational": 3, 00:22:42.642 "base_bdevs_list": [ 00:22:42.642 { 00:22:42.642 "name": null, 00:22:42.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.642 "is_configured": false, 00:22:42.642 "data_offset": 2048, 00:22:42.642 "data_size": 63488 00:22:42.642 }, 00:22:42.642 { 00:22:42.642 "name": "pt2", 00:22:42.642 "uuid": "20d1e7e2-3db9-5f53-999d-9fe9a249c191", 00:22:42.642 "is_configured": true, 00:22:42.642 "data_offset": 2048, 00:22:42.642 "data_size": 63488 00:22:42.642 }, 00:22:42.642 { 00:22:42.642 "name": "pt3", 00:22:42.642 "uuid": "126f8ecf-cd38-5888-9cbb-19ca1f4aede4", 00:22:42.642 "is_configured": true, 00:22:42.642 "data_offset": 2048, 00:22:42.642 "data_size": 63488 00:22:42.642 }, 00:22:42.642 { 00:22:42.642 "name": null, 00:22:42.642 "uuid": "48130c8d-0579-5e9f-a35e-d5e5c00f0322", 00:22:42.642 "is_configured": false, 00:22:42.642 "data_offset": 2048, 00:22:42.642 "data_size": 63488 00:22:42.642 } 00:22:42.642 ] 00:22:42.642 }' 00:22:42.642 00:36:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:42.642 00:36:36 -- common/autotest_common.sh@10 -- # set +x 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@462 -- # i=3 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:43.576 [2024-04-24 00:36:37.262249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:43.576 [2024-04-24 00:36:37.262544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.576 [2024-04-24 00:36:37.262623] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:43.576 [2024-04-24 00:36:37.262730] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.576 [2024-04-24 00:36:37.263291] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.576 [2024-04-24 00:36:37.263441] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:43.576 [2024-04-24 00:36:37.263657] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:43.576 [2024-04-24 00:36:37.263785] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:43.576 [2024-04-24 00:36:37.263958] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:22:43.576 [2024-04-24 00:36:37.264047] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:43.576 [2024-04-24 00:36:37.264223] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:22:43.576 [2024-04-24 00:36:37.264647] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:22:43.576 [2024-04-24 00:36:37.264761] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:22:43.576 [2024-04-24 00:36:37.264989] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.576 pt4 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.576 00:36:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.834 00:36:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:43.834 "name": "raid_bdev1", 00:22:43.834 "uuid": "91f71aa2-5a94-4e62-9395-981328cd3324", 00:22:43.834 "strip_size_kb": 0, 00:22:43.834 "state": "online", 00:22:43.834 "raid_level": "raid1", 00:22:43.834 "superblock": true, 00:22:43.834 "num_base_bdevs": 4, 00:22:43.834 "num_base_bdevs_discovered": 3, 00:22:43.834 "num_base_bdevs_operational": 3, 00:22:43.834 "base_bdevs_list": [ 00:22:43.834 { 00:22:43.834 "name": null, 00:22:43.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.834 "is_configured": false, 00:22:43.834 "data_offset": 2048, 00:22:43.834 "data_size": 63488 00:22:43.834 }, 00:22:43.834 { 00:22:43.834 "name": "pt2", 00:22:43.834 "uuid": "20d1e7e2-3db9-5f53-999d-9fe9a249c191", 00:22:43.834 "is_configured": true, 00:22:43.834 "data_offset": 2048, 00:22:43.834 "data_size": 63488 00:22:43.834 }, 00:22:43.834 { 00:22:43.834 "name": "pt3", 00:22:43.834 "uuid": "126f8ecf-cd38-5888-9cbb-19ca1f4aede4", 00:22:43.834 "is_configured": true, 00:22:43.834 "data_offset": 2048, 00:22:43.834 "data_size": 63488 00:22:43.834 }, 00:22:43.834 { 00:22:43.834 "name": "pt4", 00:22:43.834 
"uuid": "48130c8d-0579-5e9f-a35e-d5e5c00f0322", 00:22:43.834 "is_configured": true, 00:22:43.834 "data_offset": 2048, 00:22:43.834 "data_size": 63488 00:22:43.834 } 00:22:43.834 ] 00:22:43.834 }' 00:22:43.834 00:36:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:43.834 00:36:37 -- common/autotest_common.sh@10 -- # set +x 00:22:44.400 00:36:38 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:22:44.400 00:36:38 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:44.658 [2024-04-24 00:36:38.223750] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:44.658 [2024-04-24 00:36:38.223987] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:44.658 [2024-04-24 00:36:38.224146] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:44.658 [2024-04-24 00:36:38.224328] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:44.658 [2024-04-24 00:36:38.224417] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:22:44.658 00:36:38 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.658 00:36:38 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:22:44.916 00:36:38 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:22:44.916 00:36:38 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:22:44.916 00:36:38 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:45.173 [2024-04-24 00:36:38.711780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:45.173 [2024-04-24 00:36:38.712017] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:45.173 [2024-04-24 00:36:38.712173] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:45.173 [2024-04-24 00:36:38.712280] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.173 [2024-04-24 00:36:38.714866] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.173 [2024-04-24 00:36:38.715083] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:45.173 [2024-04-24 00:36:38.715321] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:45.173 [2024-04-24 00:36:38.715458] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:45.173 pt1 00:22:45.173 00:36:38 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:45.173 00:36:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:45.173 00:36:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:45.173 00:36:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:45.173 00:36:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:45.173 00:36:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:45.173 00:36:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:45.173 00:36:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:45.173 00:36:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:45.173 00:36:38 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:22:45.173 00:36:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.173 00:36:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.431 00:36:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:45.431 "name": "raid_bdev1", 00:22:45.431 "uuid": "91f71aa2-5a94-4e62-9395-981328cd3324", 00:22:45.431 "strip_size_kb": 0, 00:22:45.431 "state": "configuring", 00:22:45.431 "raid_level": "raid1", 00:22:45.431 "superblock": true, 00:22:45.431 "num_base_bdevs": 4, 00:22:45.431 "num_base_bdevs_discovered": 1, 00:22:45.431 "num_base_bdevs_operational": 4, 00:22:45.431 "base_bdevs_list": [ 00:22:45.431 { 00:22:45.431 "name": "pt1", 00:22:45.431 "uuid": "c92a5e00-68e5-5a35-b234-9dac598c3118", 00:22:45.431 "is_configured": true, 00:22:45.431 "data_offset": 2048, 00:22:45.431 "data_size": 63488 00:22:45.431 }, 00:22:45.431 { 00:22:45.431 "name": null, 00:22:45.431 "uuid": "20d1e7e2-3db9-5f53-999d-9fe9a249c191", 00:22:45.431 "is_configured": false, 00:22:45.431 "data_offset": 2048, 00:22:45.431 "data_size": 63488 00:22:45.431 }, 00:22:45.431 { 00:22:45.431 "name": null, 00:22:45.431 "uuid": "126f8ecf-cd38-5888-9cbb-19ca1f4aede4", 00:22:45.431 "is_configured": false, 00:22:45.431 "data_offset": 2048, 00:22:45.431 "data_size": 63488 00:22:45.431 }, 00:22:45.431 { 00:22:45.431 "name": null, 00:22:45.431 "uuid": "48130c8d-0579-5e9f-a35e-d5e5c00f0322", 00:22:45.431 "is_configured": false, 00:22:45.431 "data_offset": 2048, 00:22:45.431 "data_size": 63488 00:22:45.431 } 00:22:45.431 ] 00:22:45.431 }' 00:22:45.431 00:36:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:45.431 00:36:38 -- common/autotest_common.sh@10 -- # set +x 00:22:45.996 00:36:39 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:22:45.996 00:36:39 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:45.996 00:36:39 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:46.253 00:36:39 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:46.253 00:36:39 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:46.253 00:36:39 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:46.625 00:36:40 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:46.625 00:36:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:46.625 00:36:40 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:46.625 00:36:40 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:46.625 00:36:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:46.625 00:36:40 -- bdev/bdev_raid.sh@489 -- # i=3 00:22:46.625 00:36:40 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:46.886 [2024-04-24 00:36:40.472210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:46.886 [2024-04-24 00:36:40.472478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.886 [2024-04-24 00:36:40.472547] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:46.886 [2024-04-24 00:36:40.472670] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.886 [2024-04-24 00:36:40.473171] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.886 [2024-04-24 00:36:40.473333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:46.886 [2024-04-24 00:36:40.473540] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:46.886 [2024-04-24 00:36:40.473632] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:46.886 [2024-04-24 00:36:40.473704] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:46.886 [2024-04-24 00:36:40.473752] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state configuring 00:22:46.886 [2024-04-24 00:36:40.473987] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:46.886 pt4 00:22:46.886 00:36:40 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:46.886 00:36:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:46.886 00:36:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:46.886 00:36:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:46.886 00:36:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:46.886 00:36:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:46.886 00:36:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:46.886 00:36:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:46.886 00:36:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:46.886 00:36:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:46.886 00:36:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.886 00:36:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.145 00:36:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:47.145 "name": "raid_bdev1", 00:22:47.145 "uuid": "91f71aa2-5a94-4e62-9395-981328cd3324", 00:22:47.145 "strip_size_kb": 0, 00:22:47.145 "state": "configuring", 00:22:47.145 "raid_level": "raid1", 00:22:47.145 "superblock": true, 00:22:47.145 "num_base_bdevs": 4, 00:22:47.145 "num_base_bdevs_discovered": 1, 00:22:47.145 "num_base_bdevs_operational": 3, 00:22:47.145 "base_bdevs_list": [ 00:22:47.145 { 00:22:47.145 "name": null, 00:22:47.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.145 "is_configured": false, 00:22:47.145 "data_offset": 2048, 00:22:47.145 "data_size": 63488 00:22:47.145 }, 00:22:47.145 { 00:22:47.145 "name": null, 00:22:47.145 "uuid": "20d1e7e2-3db9-5f53-999d-9fe9a249c191", 00:22:47.145 "is_configured": false, 00:22:47.145 "data_offset": 2048, 00:22:47.145 "data_size": 63488 00:22:47.145 }, 00:22:47.145 { 00:22:47.145 "name": null, 00:22:47.145 "uuid": "126f8ecf-cd38-5888-9cbb-19ca1f4aede4", 00:22:47.145 "is_configured": false, 00:22:47.145 "data_offset": 2048, 00:22:47.145 "data_size": 63488 00:22:47.145 }, 00:22:47.145 { 00:22:47.145 "name": "pt4", 00:22:47.145 "uuid": "48130c8d-0579-5e9f-a35e-d5e5c00f0322", 00:22:47.145 "is_configured": true, 00:22:47.145 "data_offset": 2048, 00:22:47.145 "data_size": 63488 00:22:47.145 } 00:22:47.145 ] 00:22:47.145 }' 00:22:47.145 00:36:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:47.145 00:36:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.713 00:36:41 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 
00:22:47.713 00:36:41 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:47.713 00:36:41 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:47.972 [2024-04-24 00:36:41.576482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:47.972 [2024-04-24 00:36:41.576754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.972 [2024-04-24 00:36:41.576822] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:47.972 [2024-04-24 00:36:41.576919] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.972 [2024-04-24 00:36:41.577415] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.972 [2024-04-24 00:36:41.577582] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:47.972 [2024-04-24 00:36:41.577779] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:47.972 [2024-04-24 00:36:41.577888] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:47.972 pt2 00:22:47.973 00:36:41 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:47.973 00:36:41 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:47.973 00:36:41 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:48.232 [2024-04-24 00:36:41.860550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:48.232 [2024-04-24 00:36:41.860791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.232 [2024-04-24 00:36:41.860856] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:48.232 [2024-04-24 00:36:41.860954] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.232 [2024-04-24 00:36:41.861460] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.232 [2024-04-24 00:36:41.861650] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:48.232 [2024-04-24 00:36:41.861896] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:48.232 [2024-04-24 00:36:41.862024] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:48.232 [2024-04-24 00:36:41.862216] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:22:48.232 [2024-04-24 00:36:41.862381] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:48.232 [2024-04-24 00:36:41.862543] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:22:48.232 [2024-04-24 00:36:41.863092] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:22:48.232 [2024-04-24 00:36:41.863236] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011f80 00:22:48.232 [2024-04-24 00:36:41.863479] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.232 pt3 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:48.232 00:36:41 
-- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.232 00:36:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.491 00:36:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.491 "name": "raid_bdev1", 00:22:48.491 "uuid": "91f71aa2-5a94-4e62-9395-981328cd3324", 00:22:48.492 "strip_size_kb": 0, 00:22:48.492 "state": "online", 00:22:48.492 "raid_level": "raid1", 00:22:48.492 "superblock": true, 00:22:48.492 "num_base_bdevs": 4, 00:22:48.492 "num_base_bdevs_discovered": 3, 00:22:48.492 "num_base_bdevs_operational": 3, 00:22:48.492 "base_bdevs_list": [ 00:22:48.492 { 00:22:48.492 "name": null, 00:22:48.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.492 "is_configured": false, 00:22:48.492 "data_offset": 2048, 00:22:48.492 "data_size": 63488 00:22:48.492 }, 00:22:48.492 { 00:22:48.492 "name": "pt2", 00:22:48.492 "uuid": "20d1e7e2-3db9-5f53-999d-9fe9a249c191", 00:22:48.492 "is_configured": true, 00:22:48.492 "data_offset": 2048, 00:22:48.492 "data_size": 63488 00:22:48.492 }, 00:22:48.492 { 00:22:48.492 "name": "pt3", 00:22:48.492 "uuid": "126f8ecf-cd38-5888-9cbb-19ca1f4aede4", 00:22:48.492 "is_configured": true, 00:22:48.492 "data_offset": 2048, 00:22:48.492 "data_size": 63488 00:22:48.492 }, 00:22:48.492 { 00:22:48.492 "name": "pt4", 00:22:48.492 "uuid": "48130c8d-0579-5e9f-a35e-d5e5c00f0322", 00:22:48.492 "is_configured": true, 00:22:48.492 "data_offset": 2048, 00:22:48.492 "data_size": 63488 00:22:48.492 } 00:22:48.492 ] 00:22:48.492 }' 00:22:48.492 00:36:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.492 00:36:42 -- common/autotest_common.sh@10 -- # set +x 00:22:49.088 00:36:42 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:22:49.088 00:36:42 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:49.346 [2024-04-24 00:36:43.037008] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:49.347 00:36:43 -- bdev/bdev_raid.sh@506 -- # '[' 91f71aa2-5a94-4e62-9395-981328cd3324 '!=' 91f71aa2-5a94-4e62-9395-981328cd3324 ']' 00:22:49.347 00:36:43 -- bdev/bdev_raid.sh@511 -- # killprocess 130499 00:22:49.347 00:36:43 -- common/autotest_common.sh@936 -- # '[' -z 130499 ']' 00:22:49.347 00:36:43 -- common/autotest_common.sh@940 -- # kill -0 130499 00:22:49.347 00:36:43 -- common/autotest_common.sh@941 -- # uname 00:22:49.347 00:36:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:49.347 00:36:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130499 00:22:49.347 killing process with pid 130499 00:22:49.347 00:36:43 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:49.347 00:36:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:49.347 00:36:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130499' 00:22:49.347 00:36:43 -- common/autotest_common.sh@955 -- # kill 130499 00:22:49.347 [2024-04-24 00:36:43.085628] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:49.347 [2024-04-24 00:36:43.085696] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:49.347 00:36:43 -- common/autotest_common.sh@960 -- # wait 130499 00:22:49.347 [2024-04-24 00:36:43.085762] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:49.347 [2024-04-24 00:36:43.085771] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state offline 00:22:49.914 [2024-04-24 00:36:43.506690] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:51.291 ************************************ 00:22:51.291 END TEST raid_superblock_test 00:22:51.291 ************************************ 00:22:51.291 00:36:44 -- bdev/bdev_raid.sh@513 -- # return 0 00:22:51.291 00:22:51.291 real 0m23.931s 00:22:51.291 user 0m42.800s 00:22:51.291 sys 0m3.329s 00:22:51.291 00:36:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:51.291 00:36:44 -- common/autotest_common.sh@10 -- # set +x 00:22:51.291 00:36:44 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:22:51.291 00:36:44 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:22:51.291 00:36:44 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:22:51.291 00:36:44 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:51.291 00:36:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:51.291 00:36:44 -- common/autotest_common.sh@10 -- # set +x 00:22:51.291 ************************************ 00:22:51.291 START TEST raid_rebuild_test 00:22:51.291 ************************************ 00:22:51.291 00:36:45 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 false false 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@526 -- # local data_offset 
00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@544 -- # raid_pid=131205 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131205 /var/tmp/spdk-raid.sock 00:22:51.291 00:36:45 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:51.291 00:36:45 -- common/autotest_common.sh@817 -- # '[' -z 131205 ']' 00:22:51.291 00:36:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:51.291 00:36:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:51.291 00:36:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:51.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:51.291 00:36:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:51.291 00:36:45 -- common/autotest_common.sh@10 -- # set +x 00:22:51.550 [2024-04-24 00:36:45.107563] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:22:51.550 [2024-04-24 00:36:45.107820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131205 ] 00:22:51.550 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:51.550 Zero copy mechanism will not be used. 
00:22:51.550 [2024-04-24 00:36:45.293051] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.808 [2024-04-24 00:36:45.579773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.067 [2024-04-24 00:36:45.797598] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:52.326 00:36:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:52.326 00:36:46 -- common/autotest_common.sh@850 -- # return 0 00:22:52.326 00:36:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:52.326 00:36:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:52.326 00:36:46 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:52.584 BaseBdev1 00:22:52.584 00:36:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:52.584 00:36:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:52.584 00:36:46 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:52.842 BaseBdev2 00:22:52.842 00:36:46 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:53.122 spare_malloc 00:22:53.122 00:36:46 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:53.381 spare_delay 00:22:53.381 00:36:47 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:53.639 [2024-04-24 00:36:47.371771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:53.639 [2024-04-24 00:36:47.371877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.639 [2024-04-24 00:36:47.371942] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:22:53.639 [2024-04-24 00:36:47.371992] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.639 [2024-04-24 00:36:47.374562] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.639 [2024-04-24 00:36:47.374619] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:53.639 spare 00:22:53.639 00:36:47 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:53.898 [2024-04-24 00:36:47.579897] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:53.898 [2024-04-24 00:36:47.581965] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:53.898 [2024-04-24 00:36:47.582047] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:22:53.898 [2024-04-24 00:36:47.582058] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:53.898 [2024-04-24 00:36:47.582229] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:22:53.898 [2024-04-24 00:36:47.582564] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:22:53.898 [2024-04-24 00:36:47.582584] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000010e00 00:22:53.898 [2024-04-24 00:36:47.582759] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.898 00:36:47 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:53.898 00:36:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:53.898 00:36:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:53.898 00:36:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:53.898 00:36:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:53.898 00:36:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:53.898 00:36:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:53.898 00:36:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:53.898 00:36:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:53.898 00:36:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:53.898 00:36:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.898 00:36:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.156 00:36:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:54.156 "name": "raid_bdev1", 00:22:54.156 "uuid": "e6293c92-eb49-4845-bab8-aaacce2daeae", 00:22:54.156 "strip_size_kb": 0, 00:22:54.156 "state": "online", 00:22:54.156 "raid_level": "raid1", 00:22:54.156 "superblock": false, 00:22:54.156 "num_base_bdevs": 2, 00:22:54.156 "num_base_bdevs_discovered": 2, 00:22:54.156 "num_base_bdevs_operational": 2, 00:22:54.156 "base_bdevs_list": [ 00:22:54.156 { 00:22:54.156 "name": "BaseBdev1", 00:22:54.156 "uuid": "186f1adb-d9c6-4886-9a9f-7d1c1f95b92c", 00:22:54.156 "is_configured": true, 00:22:54.156 "data_offset": 0, 00:22:54.156 "data_size": 65536 00:22:54.156 }, 00:22:54.156 { 00:22:54.156 "name": "BaseBdev2", 00:22:54.157 "uuid": "eb8d5fd2-46f7-450e-bc40-a8840c841575", 00:22:54.157 "is_configured": true, 00:22:54.157 "data_offset": 0, 00:22:54.157 "data_size": 65536 00:22:54.157 } 00:22:54.157 ] 00:22:54.157 }' 00:22:54.157 00:36:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:54.157 00:36:47 -- common/autotest_common.sh@10 -- # set +x 00:22:54.723 00:36:48 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:54.723 00:36:48 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:54.981 [2024-04-24 00:36:48.588253] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.981 00:36:48 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:54.981 00:36:48 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:54.981 00:36:48 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.259 00:36:48 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:55.259 00:36:48 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:55.259 00:36:48 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:55.259 00:36:48 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:55.259 00:36:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:55.259 00:36:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:55.259 00:36:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:55.259 00:36:48 -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:22:55.259 00:36:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:55.259 00:36:48 -- bdev/nbd_common.sh@12 -- # local i 00:22:55.259 00:36:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:55.259 00:36:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.259 00:36:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:55.518 [2024-04-24 00:36:49.148277] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:55.518 /dev/nbd0 00:22:55.518 00:36:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:55.518 00:36:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:55.518 00:36:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:22:55.518 00:36:49 -- common/autotest_common.sh@855 -- # local i 00:22:55.518 00:36:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:55.518 00:36:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:55.518 00:36:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:22:55.518 00:36:49 -- common/autotest_common.sh@859 -- # break 00:22:55.518 00:36:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:55.518 00:36:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:55.518 00:36:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:55.518 1+0 records in 00:22:55.518 1+0 records out 00:22:55.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000996636 s, 4.1 MB/s 00:22:55.518 00:36:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.518 00:36:49 -- common/autotest_common.sh@872 -- # size=4096 00:22:55.518 00:36:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:55.518 00:36:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:55.518 00:36:49 -- common/autotest_common.sh@875 -- # return 0 00:22:55.518 00:36:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:55.518 00:36:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.518 00:36:49 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:55.518 00:36:49 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:55.518 00:36:49 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:59.703 65536+0 records in 00:22:59.703 65536+0 records out 00:22:59.703 33554432 bytes (34 MB, 32 MiB) copied, 4.16814 s, 8.1 MB/s 00:22:59.703 00:36:53 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:59.703 00:36:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:59.703 00:36:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:59.703 00:36:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:59.703 00:36:53 -- bdev/nbd_common.sh@51 -- # local i 00:22:59.703 00:36:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:59.703 00:36:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:59.962 [2024-04-24 00:36:53.672109] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.962 00:36:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:59.962 00:36:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:59.962 00:36:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:59.962 00:36:53 
-- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:59.962 00:36:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:59.962 00:36:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:59.962 00:36:53 -- bdev/nbd_common.sh@41 -- # break 00:22:59.962 00:36:53 -- bdev/nbd_common.sh@45 -- # return 0 00:22:59.962 00:36:53 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:00.220 [2024-04-24 00:36:53.951782] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:00.220 00:36:53 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:00.220 00:36:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:00.220 00:36:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:00.220 00:36:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:00.220 00:36:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:00.220 00:36:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:00.220 00:36:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:00.220 00:36:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:00.220 00:36:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:00.220 00:36:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:00.220 00:36:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.220 00:36:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.478 00:36:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.478 "name": "raid_bdev1", 00:23:00.478 "uuid": "e6293c92-eb49-4845-bab8-aaacce2daeae", 00:23:00.478 "strip_size_kb": 0, 00:23:00.478 "state": "online", 00:23:00.478 "raid_level": "raid1", 00:23:00.478 "superblock": false, 00:23:00.478 "num_base_bdevs": 2, 00:23:00.478 "num_base_bdevs_discovered": 1, 00:23:00.478 "num_base_bdevs_operational": 1, 00:23:00.478 "base_bdevs_list": [ 00:23:00.478 { 00:23:00.478 "name": null, 00:23:00.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.478 "is_configured": false, 00:23:00.478 "data_offset": 0, 00:23:00.478 "data_size": 65536 00:23:00.478 }, 00:23:00.478 { 00:23:00.478 "name": "BaseBdev2", 00:23:00.478 "uuid": "eb8d5fd2-46f7-450e-bc40-a8840c841575", 00:23:00.478 "is_configured": true, 00:23:00.478 "data_offset": 0, 00:23:00.478 "data_size": 65536 00:23:00.478 } 00:23:00.478 ] 00:23:00.478 }' 00:23:00.478 00:36:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.478 00:36:54 -- common/autotest_common.sh@10 -- # set +x 00:23:01.412 00:36:54 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:01.412 [2024-04-24 00:36:55.096088] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:01.412 [2024-04-24 00:36:55.096413] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:01.412 [2024-04-24 00:36:55.119627] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09550 00:23:01.412 [2024-04-24 00:36:55.122731] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:01.412 00:36:55 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:02.787 00:36:56 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:02.787 00:36:56 -- 
bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:02.787 00:36:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:02.787 00:36:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:02.787 00:36:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:02.787 00:36:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.787 00:36:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.787 00:36:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:02.787 "name": "raid_bdev1", 00:23:02.787 "uuid": "e6293c92-eb49-4845-bab8-aaacce2daeae", 00:23:02.787 "strip_size_kb": 0, 00:23:02.787 "state": "online", 00:23:02.787 "raid_level": "raid1", 00:23:02.787 "superblock": false, 00:23:02.787 "num_base_bdevs": 2, 00:23:02.787 "num_base_bdevs_discovered": 2, 00:23:02.787 "num_base_bdevs_operational": 2, 00:23:02.787 "process": { 00:23:02.787 "type": "rebuild", 00:23:02.787 "target": "spare", 00:23:02.787 "progress": { 00:23:02.787 "blocks": 24576, 00:23:02.787 "percent": 37 00:23:02.787 } 00:23:02.787 }, 00:23:02.787 "base_bdevs_list": [ 00:23:02.787 { 00:23:02.787 "name": "spare", 00:23:02.787 "uuid": "57a32a3f-dd38-5a91-af97-4b0c9d6111b1", 00:23:02.787 "is_configured": true, 00:23:02.787 "data_offset": 0, 00:23:02.787 "data_size": 65536 00:23:02.787 }, 00:23:02.787 { 00:23:02.787 "name": "BaseBdev2", 00:23:02.787 "uuid": "eb8d5fd2-46f7-450e-bc40-a8840c841575", 00:23:02.787 "is_configured": true, 00:23:02.787 "data_offset": 0, 00:23:02.787 "data_size": 65536 00:23:02.787 } 00:23:02.787 ] 00:23:02.787 }' 00:23:02.787 00:36:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:02.787 00:36:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:02.787 00:36:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:02.787 00:36:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:02.787 00:36:56 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:03.045 [2024-04-24 00:36:56.752928] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:03.045 [2024-04-24 00:36:56.834140] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:03.045 [2024-04-24 00:36:56.834449] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.301 00:36:56 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:03.301 00:36:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:03.301 00:36:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:03.301 00:36:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:03.301 00:36:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:03.301 00:36:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:03.301 00:36:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.301 00:36:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:03.301 00:36:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.301 00:36:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.301 00:36:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.301 00:36:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:03.558 00:36:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.558 "name": "raid_bdev1", 00:23:03.558 "uuid": "e6293c92-eb49-4845-bab8-aaacce2daeae", 00:23:03.558 "strip_size_kb": 0, 00:23:03.558 "state": "online", 00:23:03.558 "raid_level": "raid1", 00:23:03.558 "superblock": false, 00:23:03.558 "num_base_bdevs": 2, 00:23:03.558 "num_base_bdevs_discovered": 1, 00:23:03.558 "num_base_bdevs_operational": 1, 00:23:03.558 "base_bdevs_list": [ 00:23:03.558 { 00:23:03.558 "name": null, 00:23:03.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.558 "is_configured": false, 00:23:03.558 "data_offset": 0, 00:23:03.558 "data_size": 65536 00:23:03.558 }, 00:23:03.558 { 00:23:03.558 "name": "BaseBdev2", 00:23:03.558 "uuid": "eb8d5fd2-46f7-450e-bc40-a8840c841575", 00:23:03.558 "is_configured": true, 00:23:03.558 "data_offset": 0, 00:23:03.558 "data_size": 65536 00:23:03.558 } 00:23:03.558 ] 00:23:03.558 }' 00:23:03.558 00:36:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.558 00:36:57 -- common/autotest_common.sh@10 -- # set +x 00:23:04.123 00:36:57 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:04.123 00:36:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:04.123 00:36:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:04.123 00:36:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:04.123 00:36:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:04.123 00:36:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.123 00:36:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.689 00:36:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:04.689 "name": "raid_bdev1", 00:23:04.689 "uuid": "e6293c92-eb49-4845-bab8-aaacce2daeae", 00:23:04.689 "strip_size_kb": 0, 00:23:04.689 "state": "online", 00:23:04.689 "raid_level": "raid1", 00:23:04.689 "superblock": false, 00:23:04.689 "num_base_bdevs": 2, 00:23:04.689 "num_base_bdevs_discovered": 1, 00:23:04.689 "num_base_bdevs_operational": 1, 00:23:04.689 "base_bdevs_list": [ 00:23:04.689 { 00:23:04.689 "name": null, 00:23:04.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.689 "is_configured": false, 00:23:04.689 "data_offset": 0, 00:23:04.689 "data_size": 65536 00:23:04.689 }, 00:23:04.689 { 00:23:04.689 "name": "BaseBdev2", 00:23:04.689 "uuid": "eb8d5fd2-46f7-450e-bc40-a8840c841575", 00:23:04.689 "is_configured": true, 00:23:04.689 "data_offset": 0, 00:23:04.689 "data_size": 65536 00:23:04.689 } 00:23:04.689 ] 00:23:04.689 }' 00:23:04.689 00:36:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:04.689 00:36:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:04.689 00:36:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:04.689 00:36:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:04.689 00:36:58 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:04.947 [2024-04-24 00:36:58.553820] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:04.947 [2024-04-24 00:36:58.554125] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:04.947 [2024-04-24 00:36:58.570347] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:23:04.947 [2024-04-24 
00:36:58.572835] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:04.947 00:36:58 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:05.881 00:36:59 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:05.881 00:36:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:05.881 00:36:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:05.881 00:36:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:05.881 00:36:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:05.881 00:36:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.881 00:36:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.140 00:36:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:06.140 "name": "raid_bdev1", 00:23:06.140 "uuid": "e6293c92-eb49-4845-bab8-aaacce2daeae", 00:23:06.140 "strip_size_kb": 0, 00:23:06.140 "state": "online", 00:23:06.140 "raid_level": "raid1", 00:23:06.140 "superblock": false, 00:23:06.140 "num_base_bdevs": 2, 00:23:06.140 "num_base_bdevs_discovered": 2, 00:23:06.140 "num_base_bdevs_operational": 2, 00:23:06.140 "process": { 00:23:06.140 "type": "rebuild", 00:23:06.140 "target": "spare", 00:23:06.140 "progress": { 00:23:06.140 "blocks": 26624, 00:23:06.140 "percent": 40 00:23:06.140 } 00:23:06.140 }, 00:23:06.140 "base_bdevs_list": [ 00:23:06.140 { 00:23:06.140 "name": "spare", 00:23:06.140 "uuid": "57a32a3f-dd38-5a91-af97-4b0c9d6111b1", 00:23:06.140 "is_configured": true, 00:23:06.140 "data_offset": 0, 00:23:06.140 "data_size": 65536 00:23:06.140 }, 00:23:06.140 { 00:23:06.140 "name": "BaseBdev2", 00:23:06.140 "uuid": "eb8d5fd2-46f7-450e-bc40-a8840c841575", 00:23:06.140 "is_configured": true, 00:23:06.140 "data_offset": 0, 00:23:06.140 "data_size": 65536 00:23:06.140 } 00:23:06.140 ] 00:23:06.140 }' 00:23:06.140 00:36:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:06.398 00:36:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:06.398 00:36:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@657 -- # local timeout=443 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.398 00:37:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.655 00:37:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:06.655 "name": "raid_bdev1", 00:23:06.655 "uuid": 
"e6293c92-eb49-4845-bab8-aaacce2daeae", 00:23:06.655 "strip_size_kb": 0, 00:23:06.655 "state": "online", 00:23:06.655 "raid_level": "raid1", 00:23:06.655 "superblock": false, 00:23:06.655 "num_base_bdevs": 2, 00:23:06.655 "num_base_bdevs_discovered": 2, 00:23:06.655 "num_base_bdevs_operational": 2, 00:23:06.655 "process": { 00:23:06.655 "type": "rebuild", 00:23:06.655 "target": "spare", 00:23:06.655 "progress": { 00:23:06.655 "blocks": 32768, 00:23:06.655 "percent": 50 00:23:06.655 } 00:23:06.655 }, 00:23:06.655 "base_bdevs_list": [ 00:23:06.655 { 00:23:06.655 "name": "spare", 00:23:06.655 "uuid": "57a32a3f-dd38-5a91-af97-4b0c9d6111b1", 00:23:06.655 "is_configured": true, 00:23:06.655 "data_offset": 0, 00:23:06.655 "data_size": 65536 00:23:06.655 }, 00:23:06.655 { 00:23:06.655 "name": "BaseBdev2", 00:23:06.655 "uuid": "eb8d5fd2-46f7-450e-bc40-a8840c841575", 00:23:06.655 "is_configured": true, 00:23:06.655 "data_offset": 0, 00:23:06.655 "data_size": 65536 00:23:06.655 } 00:23:06.655 ] 00:23:06.655 }' 00:23:06.655 00:37:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:06.655 00:37:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:06.655 00:37:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:06.655 00:37:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.655 00:37:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:07.589 00:37:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:07.589 00:37:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:07.589 00:37:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:07.589 00:37:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:07.589 00:37:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:07.589 00:37:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:07.589 00:37:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.589 00:37:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.155 00:37:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:08.155 "name": "raid_bdev1", 00:23:08.155 "uuid": "e6293c92-eb49-4845-bab8-aaacce2daeae", 00:23:08.155 "strip_size_kb": 0, 00:23:08.155 "state": "online", 00:23:08.155 "raid_level": "raid1", 00:23:08.155 "superblock": false, 00:23:08.155 "num_base_bdevs": 2, 00:23:08.155 "num_base_bdevs_discovered": 2, 00:23:08.155 "num_base_bdevs_operational": 2, 00:23:08.155 "process": { 00:23:08.155 "type": "rebuild", 00:23:08.155 "target": "spare", 00:23:08.155 "progress": { 00:23:08.155 "blocks": 61440, 00:23:08.155 "percent": 93 00:23:08.155 } 00:23:08.155 }, 00:23:08.155 "base_bdevs_list": [ 00:23:08.155 { 00:23:08.155 "name": "spare", 00:23:08.155 "uuid": "57a32a3f-dd38-5a91-af97-4b0c9d6111b1", 00:23:08.155 "is_configured": true, 00:23:08.155 "data_offset": 0, 00:23:08.155 "data_size": 65536 00:23:08.155 }, 00:23:08.155 { 00:23:08.155 "name": "BaseBdev2", 00:23:08.155 "uuid": "eb8d5fd2-46f7-450e-bc40-a8840c841575", 00:23:08.155 "is_configured": true, 00:23:08.155 "data_offset": 0, 00:23:08.155 "data_size": 65536 00:23:08.155 } 00:23:08.155 ] 00:23:08.155 }' 00:23:08.155 00:37:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:08.155 00:37:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:08.155 00:37:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:08.155 00:37:01 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:08.155 00:37:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:08.155 [2024-04-24 00:37:01.793667] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:08.155 [2024-04-24 00:37:01.794101] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:08.155 [2024-04-24 00:37:01.794412] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.090 00:37:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:09.090 00:37:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.090 00:37:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:09.090 00:37:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:09.090 00:37:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:09.090 00:37:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:09.090 00:37:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.090 00:37:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.348 00:37:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:09.348 "name": "raid_bdev1", 00:23:09.349 "uuid": "e6293c92-eb49-4845-bab8-aaacce2daeae", 00:23:09.349 "strip_size_kb": 0, 00:23:09.349 "state": "online", 00:23:09.349 "raid_level": "raid1", 00:23:09.349 "superblock": false, 00:23:09.349 "num_base_bdevs": 2, 00:23:09.349 "num_base_bdevs_discovered": 2, 00:23:09.349 "num_base_bdevs_operational": 2, 00:23:09.349 "base_bdevs_list": [ 00:23:09.349 { 00:23:09.349 "name": "spare", 00:23:09.349 "uuid": "57a32a3f-dd38-5a91-af97-4b0c9d6111b1", 00:23:09.349 "is_configured": true, 00:23:09.349 "data_offset": 0, 00:23:09.349 "data_size": 65536 00:23:09.349 }, 00:23:09.349 { 00:23:09.349 "name": "BaseBdev2", 00:23:09.349 "uuid": "eb8d5fd2-46f7-450e-bc40-a8840c841575", 00:23:09.349 "is_configured": true, 00:23:09.349 "data_offset": 0, 00:23:09.349 "data_size": 65536 00:23:09.349 } 00:23:09.349 ] 00:23:09.349 }' 00:23:09.349 00:37:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:09.349 00:37:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:09.349 00:37:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:09.349 00:37:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:09.349 00:37:03 -- bdev/bdev_raid.sh@660 -- # break 00:23:09.349 00:37:03 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:09.349 00:37:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:09.349 00:37:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:09.349 00:37:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:09.349 00:37:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:09.349 00:37:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.349 00:37:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.607 00:37:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:09.607 "name": "raid_bdev1", 00:23:09.607 "uuid": "e6293c92-eb49-4845-bab8-aaacce2daeae", 00:23:09.607 "strip_size_kb": 0, 00:23:09.607 "state": "online", 00:23:09.607 "raid_level": "raid1", 00:23:09.607 "superblock": false, 00:23:09.607 "num_base_bdevs": 2, 00:23:09.607 
"num_base_bdevs_discovered": 2, 00:23:09.607 "num_base_bdevs_operational": 2, 00:23:09.607 "base_bdevs_list": [ 00:23:09.607 { 00:23:09.607 "name": "spare", 00:23:09.607 "uuid": "57a32a3f-dd38-5a91-af97-4b0c9d6111b1", 00:23:09.607 "is_configured": true, 00:23:09.607 "data_offset": 0, 00:23:09.607 "data_size": 65536 00:23:09.607 }, 00:23:09.607 { 00:23:09.607 "name": "BaseBdev2", 00:23:09.607 "uuid": "eb8d5fd2-46f7-450e-bc40-a8840c841575", 00:23:09.607 "is_configured": true, 00:23:09.607 "data_offset": 0, 00:23:09.607 "data_size": 65536 00:23:09.607 } 00:23:09.607 ] 00:23:09.607 }' 00:23:09.607 00:37:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.865 00:37:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.123 00:37:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:10.123 "name": "raid_bdev1", 00:23:10.123 "uuid": "e6293c92-eb49-4845-bab8-aaacce2daeae", 00:23:10.123 "strip_size_kb": 0, 00:23:10.123 "state": "online", 00:23:10.123 "raid_level": "raid1", 00:23:10.123 "superblock": false, 00:23:10.123 "num_base_bdevs": 2, 00:23:10.123 "num_base_bdevs_discovered": 2, 00:23:10.123 "num_base_bdevs_operational": 2, 00:23:10.123 "base_bdevs_list": [ 00:23:10.123 { 00:23:10.123 "name": "spare", 00:23:10.123 "uuid": "57a32a3f-dd38-5a91-af97-4b0c9d6111b1", 00:23:10.123 "is_configured": true, 00:23:10.123 "data_offset": 0, 00:23:10.123 "data_size": 65536 00:23:10.123 }, 00:23:10.123 { 00:23:10.123 "name": "BaseBdev2", 00:23:10.123 "uuid": "eb8d5fd2-46f7-450e-bc40-a8840c841575", 00:23:10.123 "is_configured": true, 00:23:10.123 "data_offset": 0, 00:23:10.123 "data_size": 65536 00:23:10.123 } 00:23:10.123 ] 00:23:10.123 }' 00:23:10.123 00:37:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:10.123 00:37:03 -- common/autotest_common.sh@10 -- # set +x 00:23:10.688 00:37:04 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:10.946 [2024-04-24 00:37:04.566695] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:10.946 [2024-04-24 00:37:04.566911] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:10.946 [2024-04-24 00:37:04.567145] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:10.946 [2024-04-24 
00:37:04.567389] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:10.946 [2024-04-24 00:37:04.567509] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:23:10.946 00:37:04 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:10.946 00:37:04 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.204 00:37:04 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:11.204 00:37:04 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:11.204 00:37:04 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:11.204 00:37:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:11.204 00:37:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:11.204 00:37:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:11.204 00:37:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:11.204 00:37:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:11.204 00:37:04 -- bdev/nbd_common.sh@12 -- # local i 00:23:11.204 00:37:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:11.204 00:37:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:11.204 00:37:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:11.462 /dev/nbd0 00:23:11.462 00:37:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:11.462 00:37:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:11.462 00:37:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:11.462 00:37:05 -- common/autotest_common.sh@855 -- # local i 00:23:11.462 00:37:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:11.462 00:37:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:11.462 00:37:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:11.462 00:37:05 -- common/autotest_common.sh@859 -- # break 00:23:11.462 00:37:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:11.462 00:37:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:11.462 00:37:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:11.462 1+0 records in 00:23:11.462 1+0 records out 00:23:11.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481187 s, 8.5 MB/s 00:23:11.462 00:37:05 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.462 00:37:05 -- common/autotest_common.sh@872 -- # size=4096 00:23:11.462 00:37:05 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.462 00:37:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:11.462 00:37:05 -- common/autotest_common.sh@875 -- # return 0 00:23:11.462 00:37:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:11.462 00:37:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:11.462 00:37:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:11.721 /dev/nbd1 00:23:11.721 00:37:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:11.721 00:37:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:11.721 00:37:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:23:11.721 00:37:05 -- 
common/autotest_common.sh@855 -- # local i 00:23:11.721 00:37:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:11.721 00:37:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:11.721 00:37:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:23:11.721 00:37:05 -- common/autotest_common.sh@859 -- # break 00:23:11.721 00:37:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:11.721 00:37:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:11.721 00:37:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:11.721 1+0 records in 00:23:11.721 1+0 records out 00:23:11.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000819416 s, 5.0 MB/s 00:23:11.721 00:37:05 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.721 00:37:05 -- common/autotest_common.sh@872 -- # size=4096 00:23:11.721 00:37:05 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:11.721 00:37:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:11.721 00:37:05 -- common/autotest_common.sh@875 -- # return 0 00:23:11.721 00:37:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:11.721 00:37:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:11.721 00:37:05 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:11.979 00:37:05 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:11.979 00:37:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:11.979 00:37:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:11.979 00:37:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:11.979 00:37:05 -- bdev/nbd_common.sh@51 -- # local i 00:23:11.979 00:37:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:11.979 00:37:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:12.237 00:37:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:12.237 00:37:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:12.237 00:37:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:12.237 00:37:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.237 00:37:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.237 00:37:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:12.237 00:37:05 -- bdev/nbd_common.sh@41 -- # break 00:23:12.237 00:37:05 -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.237 00:37:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:12.237 00:37:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:12.495 00:37:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:12.495 00:37:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:12.495 00:37:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:12.495 00:37:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:12.495 00:37:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:12.495 00:37:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:12.495 00:37:06 -- bdev/nbd_common.sh@41 -- # break 00:23:12.495 00:37:06 -- bdev/nbd_common.sh@45 -- # return 0 00:23:12.495 00:37:06 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:12.495 00:37:06 -- bdev/bdev_raid.sh@709 -- # killprocess 131205 
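The data comparison just above is how raid_rebuild_test decides the rebuild was correct: both raid members are exported as NBD block devices and compared byte for byte, so after a clean rebuild the spare must be identical to BaseBdev1. A minimal sketch of that flow, assuming the same RPC socket and bdev names used in this run (the /dev/nbd* paths are illustrative):

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1     # exits non-zero on the first differing byte
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1

Only after both NBD devices are torn down does the test kill the bdevperf process, as seen in the killprocess trace that follows.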
00:23:12.495 00:37:06 -- common/autotest_common.sh@936 -- # '[' -z 131205 ']' 00:23:12.495 00:37:06 -- common/autotest_common.sh@940 -- # kill -0 131205 00:23:12.495 00:37:06 -- common/autotest_common.sh@941 -- # uname 00:23:12.495 00:37:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:12.495 00:37:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131205 00:23:12.752 00:37:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:12.752 00:37:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:12.752 00:37:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131205' 00:23:12.752 killing process with pid 131205 00:23:12.752 00:37:06 -- common/autotest_common.sh@955 -- # kill 131205 00:23:12.752 Received shutdown signal, test time was about 60.000000 seconds 00:23:12.752 00:23:12.752 Latency(us) 00:23:12.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.752 =================================================================================================================== 00:23:12.752 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:12.752 00:37:06 -- common/autotest_common.sh@960 -- # wait 131205 00:23:12.752 [2024-04-24 00:37:06.303462] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:13.010 [2024-04-24 00:37:06.617441] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:14.384 ************************************ 00:23:14.384 END TEST raid_rebuild_test 00:23:14.384 ************************************ 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:14.385 00:23:14.385 real 0m23.015s 00:23:14.385 user 0m31.599s 00:23:14.385 sys 0m4.273s 00:23:14.385 00:37:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:14.385 00:37:08 -- common/autotest_common.sh@10 -- # set +x 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:23:14.385 00:37:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:14.385 00:37:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:14.385 00:37:08 -- common/autotest_common.sh@10 -- # set +x 00:23:14.385 ************************************ 00:23:14.385 START TEST raid_rebuild_test_sb 00:23:14.385 ************************************ 00:23:14.385 00:37:08 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 true false 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@522 -- # local 
raid_bdev_name=raid_bdev1 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@544 -- # raid_pid=131758 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:14.385 00:37:08 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131758 /var/tmp/spdk-raid.sock 00:23:14.385 00:37:08 -- common/autotest_common.sh@817 -- # '[' -z 131758 ']' 00:23:14.385 00:37:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:14.385 00:37:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:14.385 00:37:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:14.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:14.385 00:37:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:14.385 00:37:08 -- common/autotest_common.sh@10 -- # set +x 00:23:14.643 [2024-04-24 00:37:08.237299] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:23:14.643 [2024-04-24 00:37:08.237721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131758 ] 00:23:14.643 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:14.643 Zero copy mechanism will not be used. 
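The verify_raid_bdev_process checks that recur throughout this log all follow one pattern: fetch the raid bdev description over the test RPC socket and pull the rebuild fields out with jq. A condensed sketch of that pattern, assuming the same socket path and raid bdev name used here (the shell variable name is illustrative, not the exact script):

    info=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(echo "$info" | jq -r '.process.type // "none"') == rebuild ]]    # becomes "none" once the rebuild has finished
    [[ $(echo "$info" | jq -r '.process.target // "none"') == spare ]]    # the bdev being rebuilt onto
    echo "$info" | jq -r '.process.progress.percent'                      # rebuild progress, e.g. 37

The surrounding verify_raid_bdev_state helper uses the same bdev_raid_get_bdevs output to confirm state, raid_level, and the discovered/operational base bdev counts before and after each rebuild step.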
00:23:14.643 [2024-04-24 00:37:08.419095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.900 [2024-04-24 00:37:08.640030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.158 [2024-04-24 00:37:08.858836] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:15.727 00:37:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:15.727 00:37:09 -- common/autotest_common.sh@850 -- # return 0 00:23:15.727 00:37:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:15.727 00:37:09 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:15.727 00:37:09 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:16.018 BaseBdev1_malloc 00:23:16.018 00:37:09 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:16.018 [2024-04-24 00:37:09.773192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:16.018 [2024-04-24 00:37:09.774107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:16.018 [2024-04-24 00:37:09.774442] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:23:16.018 [2024-04-24 00:37:09.774771] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:16.018 [2024-04-24 00:37:09.777725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:16.018 [2024-04-24 00:37:09.778090] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:16.018 BaseBdev1 00:23:16.018 00:37:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:16.018 00:37:09 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:16.018 00:37:09 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:16.584 BaseBdev2_malloc 00:23:16.584 00:37:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:16.584 [2024-04-24 00:37:10.348693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:16.584 [2024-04-24 00:37:10.349278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:16.584 [2024-04-24 00:37:10.349649] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:16.584 [2024-04-24 00:37:10.350024] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:16.584 [2024-04-24 00:37:10.353238] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:16.584 [2024-04-24 00:37:10.353585] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:16.584 BaseBdev2 00:23:16.584 00:37:10 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:16.842 spare_malloc 00:23:16.842 00:37:10 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:17.100 spare_delay 00:23:17.358 00:37:10 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:17.616 [2024-04-24 00:37:11.184412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:17.616 [2024-04-24 00:37:11.185201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.616 [2024-04-24 00:37:11.185550] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:23:17.616 [2024-04-24 00:37:11.185865] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.616 [2024-04-24 00:37:11.189199] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.616 [2024-04-24 00:37:11.189526] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:17.616 spare 00:23:17.616 00:37:11 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:17.616 [2024-04-24 00:37:11.402132] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:17.616 [2024-04-24 00:37:11.404570] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:17.616 [2024-04-24 00:37:11.404929] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:23:17.616 [2024-04-24 00:37:11.405043] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:17.616 [2024-04-24 00:37:11.405226] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:17.616 [2024-04-24 00:37:11.405742] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:23:17.616 [2024-04-24 00:37:11.405864] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:23:17.616 [2024-04-24 00:37:11.406217] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.874 00:37:11 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:17.874 00:37:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:17.874 00:37:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:17.874 00:37:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:17.874 00:37:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:17.874 00:37:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:17.874 00:37:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:17.874 00:37:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:17.874 00:37:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:17.874 00:37:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:17.874 00:37:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.874 00:37:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.133 00:37:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:18.133 "name": "raid_bdev1", 00:23:18.133 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:18.133 "strip_size_kb": 0, 00:23:18.133 "state": "online", 00:23:18.133 "raid_level": "raid1", 00:23:18.133 "superblock": true, 00:23:18.133 "num_base_bdevs": 2, 00:23:18.133 "num_base_bdevs_discovered": 2, 00:23:18.133 "num_base_bdevs_operational": 2, 00:23:18.133 
"base_bdevs_list": [ 00:23:18.133 { 00:23:18.133 "name": "BaseBdev1", 00:23:18.133 "uuid": "1498cd07-cc31-53b8-be68-392ce09d1fc0", 00:23:18.133 "is_configured": true, 00:23:18.133 "data_offset": 2048, 00:23:18.133 "data_size": 63488 00:23:18.133 }, 00:23:18.133 { 00:23:18.133 "name": "BaseBdev2", 00:23:18.133 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:18.133 "is_configured": true, 00:23:18.133 "data_offset": 2048, 00:23:18.133 "data_size": 63488 00:23:18.133 } 00:23:18.133 ] 00:23:18.133 }' 00:23:18.133 00:37:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:18.133 00:37:11 -- common/autotest_common.sh@10 -- # set +x 00:23:18.699 00:37:12 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:18.699 00:37:12 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:18.957 [2024-04-24 00:37:12.722728] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:18.957 00:37:12 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:18.957 00:37:12 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:18.957 00:37:12 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.522 00:37:13 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:19.522 00:37:13 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:19.522 00:37:13 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:19.522 00:37:13 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:19.522 00:37:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:19.522 00:37:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:19.522 00:37:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:19.522 00:37:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:19.522 00:37:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:19.522 00:37:13 -- bdev/nbd_common.sh@12 -- # local i 00:23:19.522 00:37:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:19.522 00:37:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:19.522 00:37:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:19.522 [2024-04-24 00:37:13.306652] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:19.780 /dev/nbd0 00:23:19.780 00:37:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:19.780 00:37:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:19.780 00:37:13 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:19.780 00:37:13 -- common/autotest_common.sh@855 -- # local i 00:23:19.780 00:37:13 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:19.780 00:37:13 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:19.780 00:37:13 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:19.780 00:37:13 -- common/autotest_common.sh@859 -- # break 00:23:19.780 00:37:13 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:19.780 00:37:13 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:19.780 00:37:13 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:19.780 1+0 records in 00:23:19.780 1+0 records out 00:23:19.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341303 s, 12.0 MB/s 00:23:19.780 00:37:13 
-- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:19.780 00:37:13 -- common/autotest_common.sh@872 -- # size=4096 00:23:19.780 00:37:13 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:19.780 00:37:13 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:19.780 00:37:13 -- common/autotest_common.sh@875 -- # return 0 00:23:19.780 00:37:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:19.780 00:37:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:19.780 00:37:13 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:23:19.780 00:37:13 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:23:19.780 00:37:13 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:26.424 63488+0 records in 00:23:26.424 63488+0 records out 00:23:26.424 32505856 bytes (33 MB, 31 MiB) copied, 5.71088 s, 5.7 MB/s 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@51 -- # local i 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:26.424 [2024-04-24 00:37:19.306187] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@41 -- # break 00:23:26.424 00:37:19 -- bdev/nbd_common.sh@45 -- # return 0 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:26.424 [2024-04-24 00:37:19.582009] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.424 
00:37:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:26.424 "name": "raid_bdev1", 00:23:26.424 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:26.424 "strip_size_kb": 0, 00:23:26.424 "state": "online", 00:23:26.424 "raid_level": "raid1", 00:23:26.424 "superblock": true, 00:23:26.424 "num_base_bdevs": 2, 00:23:26.424 "num_base_bdevs_discovered": 1, 00:23:26.424 "num_base_bdevs_operational": 1, 00:23:26.424 "base_bdevs_list": [ 00:23:26.424 { 00:23:26.424 "name": null, 00:23:26.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.424 "is_configured": false, 00:23:26.424 "data_offset": 2048, 00:23:26.424 "data_size": 63488 00:23:26.424 }, 00:23:26.424 { 00:23:26.424 "name": "BaseBdev2", 00:23:26.424 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:26.424 "is_configured": true, 00:23:26.424 "data_offset": 2048, 00:23:26.424 "data_size": 63488 00:23:26.424 } 00:23:26.424 ] 00:23:26.424 }' 00:23:26.424 00:37:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:26.424 00:37:19 -- common/autotest_common.sh@10 -- # set +x 00:23:26.994 00:37:20 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:27.252 [2024-04-24 00:37:20.834298] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:27.252 [2024-04-24 00:37:20.834355] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:27.252 [2024-04-24 00:37:20.852993] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca2e80 00:23:27.252 [2024-04-24 00:37:20.855262] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:27.252 00:37:20 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:28.186 00:37:21 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:28.186 00:37:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:28.186 00:37:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:28.186 00:37:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:28.186 00:37:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:28.186 00:37:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.186 00:37:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.444 00:37:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:28.444 "name": "raid_bdev1", 00:23:28.444 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:28.444 "strip_size_kb": 0, 00:23:28.444 "state": "online", 00:23:28.444 "raid_level": "raid1", 00:23:28.444 "superblock": true, 00:23:28.444 "num_base_bdevs": 2, 00:23:28.444 "num_base_bdevs_discovered": 2, 00:23:28.444 "num_base_bdevs_operational": 2, 00:23:28.444 "process": { 00:23:28.444 "type": "rebuild", 00:23:28.444 "target": "spare", 00:23:28.444 "progress": { 00:23:28.444 "blocks": 26624, 00:23:28.444 "percent": 41 00:23:28.444 } 00:23:28.444 }, 00:23:28.444 "base_bdevs_list": [ 00:23:28.444 { 00:23:28.444 "name": "spare", 00:23:28.444 "uuid": "1cc159a4-5d77-583c-8f60-396e3abf402d", 00:23:28.444 "is_configured": true, 00:23:28.444 "data_offset": 2048, 00:23:28.444 "data_size": 63488 00:23:28.444 }, 00:23:28.444 { 00:23:28.444 "name": "BaseBdev2", 00:23:28.444 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:28.444 "is_configured": true, 00:23:28.444 "data_offset": 2048, 00:23:28.444 "data_size": 63488 
00:23:28.444 } 00:23:28.444 ] 00:23:28.444 }' 00:23:28.444 00:37:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:28.702 00:37:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:28.702 00:37:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:28.702 00:37:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:28.702 00:37:22 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:28.702 [2024-04-24 00:37:22.480609] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:28.963 [2024-04-24 00:37:22.566469] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:28.963 [2024-04-24 00:37:22.566588] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.963 00:37:22 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:28.963 00:37:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:28.963 00:37:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:28.963 00:37:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:28.963 00:37:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:28.963 00:37:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:28.963 00:37:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:28.963 00:37:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:28.963 00:37:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:28.963 00:37:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:28.963 00:37:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.963 00:37:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.224 00:37:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.224 "name": "raid_bdev1", 00:23:29.224 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:29.224 "strip_size_kb": 0, 00:23:29.224 "state": "online", 00:23:29.224 "raid_level": "raid1", 00:23:29.224 "superblock": true, 00:23:29.224 "num_base_bdevs": 2, 00:23:29.224 "num_base_bdevs_discovered": 1, 00:23:29.224 "num_base_bdevs_operational": 1, 00:23:29.224 "base_bdevs_list": [ 00:23:29.224 { 00:23:29.224 "name": null, 00:23:29.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.224 "is_configured": false, 00:23:29.225 "data_offset": 2048, 00:23:29.225 "data_size": 63488 00:23:29.225 }, 00:23:29.225 { 00:23:29.225 "name": "BaseBdev2", 00:23:29.225 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:29.225 "is_configured": true, 00:23:29.225 "data_offset": 2048, 00:23:29.225 "data_size": 63488 00:23:29.225 } 00:23:29.225 ] 00:23:29.225 }' 00:23:29.225 00:37:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.225 00:37:22 -- common/autotest_common.sh@10 -- # set +x 00:23:29.790 00:37:23 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:29.790 00:37:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:29.790 00:37:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:29.790 00:37:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:29.790 00:37:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:29.790 00:37:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:29.790 00:37:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.047 00:37:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:30.047 "name": "raid_bdev1", 00:23:30.047 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:30.047 "strip_size_kb": 0, 00:23:30.047 "state": "online", 00:23:30.047 "raid_level": "raid1", 00:23:30.047 "superblock": true, 00:23:30.047 "num_base_bdevs": 2, 00:23:30.047 "num_base_bdevs_discovered": 1, 00:23:30.047 "num_base_bdevs_operational": 1, 00:23:30.047 "base_bdevs_list": [ 00:23:30.047 { 00:23:30.047 "name": null, 00:23:30.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.047 "is_configured": false, 00:23:30.047 "data_offset": 2048, 00:23:30.047 "data_size": 63488 00:23:30.047 }, 00:23:30.047 { 00:23:30.047 "name": "BaseBdev2", 00:23:30.047 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:30.047 "is_configured": true, 00:23:30.047 "data_offset": 2048, 00:23:30.047 "data_size": 63488 00:23:30.047 } 00:23:30.047 ] 00:23:30.047 }' 00:23:30.047 00:37:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:30.047 00:37:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:30.047 00:37:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:30.047 00:37:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:30.048 00:37:23 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:30.305 [2024-04-24 00:37:24.006625] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:30.305 [2024-04-24 00:37:24.006674] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:30.305 [2024-04-24 00:37:24.022785] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3020 00:23:30.305 [2024-04-24 00:37:24.025016] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:30.305 00:37:24 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:31.676 "name": "raid_bdev1", 00:23:31.676 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:31.676 "strip_size_kb": 0, 00:23:31.676 "state": "online", 00:23:31.676 "raid_level": "raid1", 00:23:31.676 "superblock": true, 00:23:31.676 "num_base_bdevs": 2, 00:23:31.676 "num_base_bdevs_discovered": 2, 00:23:31.676 "num_base_bdevs_operational": 2, 00:23:31.676 "process": { 00:23:31.676 "type": "rebuild", 00:23:31.676 "target": "spare", 00:23:31.676 "progress": { 00:23:31.676 "blocks": 24576, 00:23:31.676 "percent": 38 00:23:31.676 } 00:23:31.676 }, 00:23:31.676 "base_bdevs_list": [ 00:23:31.676 { 00:23:31.676 "name": "spare", 00:23:31.676 "uuid": "1cc159a4-5d77-583c-8f60-396e3abf402d", 00:23:31.676 
"is_configured": true, 00:23:31.676 "data_offset": 2048, 00:23:31.676 "data_size": 63488 00:23:31.676 }, 00:23:31.676 { 00:23:31.676 "name": "BaseBdev2", 00:23:31.676 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:31.676 "is_configured": true, 00:23:31.676 "data_offset": 2048, 00:23:31.676 "data_size": 63488 00:23:31.676 } 00:23:31.676 ] 00:23:31.676 }' 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:31.676 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@657 -- # local timeout=468 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.676 00:37:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.001 00:37:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:32.001 "name": "raid_bdev1", 00:23:32.001 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:32.001 "strip_size_kb": 0, 00:23:32.001 "state": "online", 00:23:32.001 "raid_level": "raid1", 00:23:32.001 "superblock": true, 00:23:32.001 "num_base_bdevs": 2, 00:23:32.001 "num_base_bdevs_discovered": 2, 00:23:32.001 "num_base_bdevs_operational": 2, 00:23:32.001 "process": { 00:23:32.001 "type": "rebuild", 00:23:32.001 "target": "spare", 00:23:32.001 "progress": { 00:23:32.001 "blocks": 32768, 00:23:32.001 "percent": 51 00:23:32.001 } 00:23:32.001 }, 00:23:32.001 "base_bdevs_list": [ 00:23:32.001 { 00:23:32.001 "name": "spare", 00:23:32.001 "uuid": "1cc159a4-5d77-583c-8f60-396e3abf402d", 00:23:32.001 "is_configured": true, 00:23:32.001 "data_offset": 2048, 00:23:32.001 "data_size": 63488 00:23:32.001 }, 00:23:32.001 { 00:23:32.001 "name": "BaseBdev2", 00:23:32.001 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:32.001 "is_configured": true, 00:23:32.001 "data_offset": 2048, 00:23:32.001 "data_size": 63488 00:23:32.001 } 00:23:32.001 ] 00:23:32.001 }' 00:23:32.001 00:37:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:32.001 00:37:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:32.001 00:37:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:32.260 00:37:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:32.260 00:37:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:33.192 00:37:26 -- bdev/bdev_raid.sh@658 
-- # (( SECONDS < timeout )) 00:23:33.192 00:37:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:33.192 00:37:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:33.192 00:37:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:33.192 00:37:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:33.192 00:37:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:33.192 00:37:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.192 00:37:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.450 00:37:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:33.450 "name": "raid_bdev1", 00:23:33.450 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:33.450 "strip_size_kb": 0, 00:23:33.450 "state": "online", 00:23:33.450 "raid_level": "raid1", 00:23:33.450 "superblock": true, 00:23:33.450 "num_base_bdevs": 2, 00:23:33.450 "num_base_bdevs_discovered": 2, 00:23:33.450 "num_base_bdevs_operational": 2, 00:23:33.450 "process": { 00:23:33.450 "type": "rebuild", 00:23:33.450 "target": "spare", 00:23:33.450 "progress": { 00:23:33.450 "blocks": 61440, 00:23:33.450 "percent": 96 00:23:33.450 } 00:23:33.450 }, 00:23:33.450 "base_bdevs_list": [ 00:23:33.450 { 00:23:33.450 "name": "spare", 00:23:33.450 "uuid": "1cc159a4-5d77-583c-8f60-396e3abf402d", 00:23:33.450 "is_configured": true, 00:23:33.450 "data_offset": 2048, 00:23:33.450 "data_size": 63488 00:23:33.450 }, 00:23:33.450 { 00:23:33.450 "name": "BaseBdev2", 00:23:33.450 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:33.450 "is_configured": true, 00:23:33.450 "data_offset": 2048, 00:23:33.450 "data_size": 63488 00:23:33.450 } 00:23:33.450 ] 00:23:33.450 }' 00:23:33.450 00:37:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:33.450 00:37:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.450 00:37:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:33.450 [2024-04-24 00:37:27.151661] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:33.450 [2024-04-24 00:37:27.151732] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:33.450 [2024-04-24 00:37:27.151855] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.450 00:37:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.450 00:37:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:34.395 00:37:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:34.395 00:37:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:34.395 00:37:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:34.395 00:37:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:34.395 00:37:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:34.395 00:37:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:34.395 00:37:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.395 00:37:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:34.959 "name": "raid_bdev1", 00:23:34.959 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:34.959 "strip_size_kb": 0, 00:23:34.959 "state": 
"online", 00:23:34.959 "raid_level": "raid1", 00:23:34.959 "superblock": true, 00:23:34.959 "num_base_bdevs": 2, 00:23:34.959 "num_base_bdevs_discovered": 2, 00:23:34.959 "num_base_bdevs_operational": 2, 00:23:34.959 "base_bdevs_list": [ 00:23:34.959 { 00:23:34.959 "name": "spare", 00:23:34.959 "uuid": "1cc159a4-5d77-583c-8f60-396e3abf402d", 00:23:34.959 "is_configured": true, 00:23:34.959 "data_offset": 2048, 00:23:34.959 "data_size": 63488 00:23:34.959 }, 00:23:34.959 { 00:23:34.959 "name": "BaseBdev2", 00:23:34.959 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:34.959 "is_configured": true, 00:23:34.959 "data_offset": 2048, 00:23:34.959 "data_size": 63488 00:23:34.959 } 00:23:34.959 ] 00:23:34.959 }' 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@660 -- # break 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.959 00:37:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.216 00:37:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:35.216 "name": "raid_bdev1", 00:23:35.216 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:35.216 "strip_size_kb": 0, 00:23:35.216 "state": "online", 00:23:35.216 "raid_level": "raid1", 00:23:35.216 "superblock": true, 00:23:35.216 "num_base_bdevs": 2, 00:23:35.216 "num_base_bdevs_discovered": 2, 00:23:35.216 "num_base_bdevs_operational": 2, 00:23:35.216 "base_bdevs_list": [ 00:23:35.216 { 00:23:35.216 "name": "spare", 00:23:35.216 "uuid": "1cc159a4-5d77-583c-8f60-396e3abf402d", 00:23:35.216 "is_configured": true, 00:23:35.216 "data_offset": 2048, 00:23:35.216 "data_size": 63488 00:23:35.216 }, 00:23:35.216 { 00:23:35.217 "name": "BaseBdev2", 00:23:35.217 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:35.217 "is_configured": true, 00:23:35.217 "data_offset": 2048, 00:23:35.217 "data_size": 63488 00:23:35.217 } 00:23:35.217 ] 00:23:35.217 }' 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:35.217 00:37:28 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.217 00:37:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.474 00:37:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:35.474 "name": "raid_bdev1", 00:23:35.474 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:35.474 "strip_size_kb": 0, 00:23:35.474 "state": "online", 00:23:35.474 "raid_level": "raid1", 00:23:35.474 "superblock": true, 00:23:35.474 "num_base_bdevs": 2, 00:23:35.474 "num_base_bdevs_discovered": 2, 00:23:35.474 "num_base_bdevs_operational": 2, 00:23:35.474 "base_bdevs_list": [ 00:23:35.474 { 00:23:35.474 "name": "spare", 00:23:35.474 "uuid": "1cc159a4-5d77-583c-8f60-396e3abf402d", 00:23:35.474 "is_configured": true, 00:23:35.474 "data_offset": 2048, 00:23:35.474 "data_size": 63488 00:23:35.474 }, 00:23:35.474 { 00:23:35.474 "name": "BaseBdev2", 00:23:35.474 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:35.474 "is_configured": true, 00:23:35.474 "data_offset": 2048, 00:23:35.474 "data_size": 63488 00:23:35.474 } 00:23:35.474 ] 00:23:35.474 }' 00:23:35.474 00:37:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:35.474 00:37:29 -- common/autotest_common.sh@10 -- # set +x 00:23:36.080 00:37:29 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:36.338 [2024-04-24 00:37:30.075519] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:36.338 [2024-04-24 00:37:30.075562] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:36.338 [2024-04-24 00:37:30.075634] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:36.338 [2024-04-24 00:37:30.075709] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:36.338 [2024-04-24 00:37:30.075720] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:23:36.338 00:37:30 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:36.338 00:37:30 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.596 00:37:30 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:36.596 00:37:30 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:36.596 00:37:30 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:36.596 00:37:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:36.596 00:37:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:36.596 00:37:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:36.596 00:37:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:36.596 00:37:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:36.596 00:37:30 -- bdev/nbd_common.sh@12 -- # local i 00:23:36.596 00:37:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:36.596 00:37:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:36.596 00:37:30 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:37.159 /dev/nbd0 00:23:37.159 00:37:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:37.159 00:37:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:37.159 00:37:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:37.159 00:37:30 -- common/autotest_common.sh@855 -- # local i 00:23:37.159 00:37:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:37.159 00:37:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:37.159 00:37:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:37.159 00:37:30 -- common/autotest_common.sh@859 -- # break 00:23:37.159 00:37:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:37.159 00:37:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:37.159 00:37:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:37.159 1+0 records in 00:23:37.159 1+0 records out 00:23:37.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100924 s, 4.1 MB/s 00:23:37.159 00:37:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:37.159 00:37:30 -- common/autotest_common.sh@872 -- # size=4096 00:23:37.159 00:37:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:37.159 00:37:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:37.159 00:37:30 -- common/autotest_common.sh@875 -- # return 0 00:23:37.159 00:37:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:37.159 00:37:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:37.159 00:37:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:37.416 /dev/nbd1 00:23:37.416 00:37:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:37.416 00:37:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:37.416 00:37:31 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:23:37.416 00:37:31 -- common/autotest_common.sh@855 -- # local i 00:23:37.416 00:37:31 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:37.416 00:37:31 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:37.416 00:37:31 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:23:37.416 00:37:31 -- common/autotest_common.sh@859 -- # break 00:23:37.416 00:37:31 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:37.416 00:37:31 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:37.416 00:37:31 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:37.416 1+0 records in 00:23:37.416 1+0 records out 00:23:37.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319737 s, 12.8 MB/s 00:23:37.416 00:37:31 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:37.416 00:37:31 -- common/autotest_common.sh@872 -- # size=4096 00:23:37.416 00:37:31 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:37.416 00:37:31 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:37.416 00:37:31 -- common/autotest_common.sh@875 -- # return 0 00:23:37.416 00:37:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:37.416 00:37:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:37.416 00:37:31 -- bdev/bdev_raid.sh@688 
-- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:37.673 00:37:31 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:37.673 00:37:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:37.673 00:37:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:37.673 00:37:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:37.673 00:37:31 -- bdev/nbd_common.sh@51 -- # local i 00:23:37.673 00:37:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.673 00:37:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:37.930 00:37:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:37.930 00:37:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:37.930 00:37:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:37.930 00:37:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:37.930 00:37:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.930 00:37:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:37.930 00:37:31 -- bdev/nbd_common.sh@41 -- # break 00:23:37.930 00:37:31 -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.930 00:37:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.930 00:37:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:38.187 00:37:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:38.187 00:37:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:38.187 00:37:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:38.187 00:37:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:38.187 00:37:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:38.187 00:37:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:38.187 00:37:31 -- bdev/nbd_common.sh@41 -- # break 00:23:38.187 00:37:31 -- bdev/nbd_common.sh@45 -- # return 0 00:23:38.187 00:37:31 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:38.187 00:37:31 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:38.187 00:37:31 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:38.187 00:37:31 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:38.443 00:37:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:38.700 [2024-04-24 00:37:32.321545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:38.700 [2024-04-24 00:37:32.321649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.700 [2024-04-24 00:37:32.321685] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:38.700 [2024-04-24 00:37:32.321713] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.700 [2024-04-24 00:37:32.324441] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.700 [2024-04-24 00:37:32.324539] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:38.700 [2024-04-24 00:37:32.324670] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:38.700 [2024-04-24 00:37:32.324730] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:23:38.700 BaseBdev1 00:23:38.700 00:37:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:38.700 00:37:32 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:23:38.700 00:37:32 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:23:38.956 00:37:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:38.956 [2024-04-24 00:37:32.741586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:38.956 [2024-04-24 00:37:32.741701] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.956 [2024-04-24 00:37:32.741741] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:38.956 [2024-04-24 00:37:32.741773] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.956 [2024-04-24 00:37:32.742258] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.956 [2024-04-24 00:37:32.742309] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:38.956 [2024-04-24 00:37:32.742443] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:23:38.956 [2024-04-24 00:37:32.742456] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:23:38.956 [2024-04-24 00:37:32.742464] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:38.956 [2024-04-24 00:37:32.742489] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:23:38.956 [2024-04-24 00:37:32.742570] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:38.956 BaseBdev2 00:23:39.213 00:37:32 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:39.213 00:37:32 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:39.471 [2024-04-24 00:37:33.193750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:39.471 [2024-04-24 00:37:33.193849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.471 [2024-04-24 00:37:33.193893] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:39.471 [2024-04-24 00:37:33.193915] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.471 [2024-04-24 00:37:33.194464] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.471 [2024-04-24 00:37:33.194509] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:39.471 [2024-04-24 00:37:33.194640] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:39.471 [2024-04-24 00:37:33.194670] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:39.471 spare 00:23:39.471 00:37:33 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:39.471 00:37:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:39.471 00:37:33 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:39.471 00:37:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:39.471 00:37:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:39.471 00:37:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:39.471 00:37:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:39.471 00:37:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:39.471 00:37:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:39.471 00:37:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:39.471 00:37:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.471 00:37:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.729 [2024-04-24 00:37:33.294769] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:23:39.729 [2024-04-24 00:37:33.294811] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:39.729 [2024-04-24 00:37:33.294994] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:23:39.729 [2024-04-24 00:37:33.295411] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:23:39.729 [2024-04-24 00:37:33.295424] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:23:39.729 [2024-04-24 00:37:33.295561] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.988 00:37:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:39.988 "name": "raid_bdev1", 00:23:39.988 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:39.988 "strip_size_kb": 0, 00:23:39.988 "state": "online", 00:23:39.988 "raid_level": "raid1", 00:23:39.988 "superblock": true, 00:23:39.988 "num_base_bdevs": 2, 00:23:39.988 "num_base_bdevs_discovered": 2, 00:23:39.988 "num_base_bdevs_operational": 2, 00:23:39.988 "base_bdevs_list": [ 00:23:39.988 { 00:23:39.988 "name": "spare", 00:23:39.988 "uuid": "1cc159a4-5d77-583c-8f60-396e3abf402d", 00:23:39.988 "is_configured": true, 00:23:39.988 "data_offset": 2048, 00:23:39.988 "data_size": 63488 00:23:39.988 }, 00:23:39.988 { 00:23:39.988 "name": "BaseBdev2", 00:23:39.988 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:39.988 "is_configured": true, 00:23:39.988 "data_offset": 2048, 00:23:39.988 "data_size": 63488 00:23:39.988 } 00:23:39.988 ] 00:23:39.988 }' 00:23:39.988 00:37:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:39.988 00:37:33 -- common/autotest_common.sh@10 -- # set +x 00:23:40.554 00:37:34 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:40.554 00:37:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:40.554 00:37:34 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:40.554 00:37:34 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:40.554 00:37:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:40.554 00:37:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.554 00:37:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.811 00:37:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:40.811 "name": "raid_bdev1", 00:23:40.811 "uuid": "c25f018d-6015-410a-a017-92c610a2e038", 00:23:40.811 "strip_size_kb": 0, 00:23:40.811 "state": "online", 
00:23:40.811 "raid_level": "raid1", 00:23:40.811 "superblock": true, 00:23:40.811 "num_base_bdevs": 2, 00:23:40.811 "num_base_bdevs_discovered": 2, 00:23:40.811 "num_base_bdevs_operational": 2, 00:23:40.811 "base_bdevs_list": [ 00:23:40.811 { 00:23:40.811 "name": "spare", 00:23:40.811 "uuid": "1cc159a4-5d77-583c-8f60-396e3abf402d", 00:23:40.811 "is_configured": true, 00:23:40.811 "data_offset": 2048, 00:23:40.811 "data_size": 63488 00:23:40.811 }, 00:23:40.811 { 00:23:40.811 "name": "BaseBdev2", 00:23:40.811 "uuid": "12c7e2ec-d022-5652-ba3b-40d2c70192c8", 00:23:40.811 "is_configured": true, 00:23:40.811 "data_offset": 2048, 00:23:40.811 "data_size": 63488 00:23:40.811 } 00:23:40.811 ] 00:23:40.811 }' 00:23:40.811 00:37:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:40.811 00:37:34 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:40.811 00:37:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:41.069 00:37:34 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:41.069 00:37:34 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.069 00:37:34 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:41.328 00:37:34 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.328 00:37:34 -- bdev/bdev_raid.sh@709 -- # killprocess 131758 00:23:41.328 00:37:34 -- common/autotest_common.sh@936 -- # '[' -z 131758 ']' 00:23:41.328 00:37:34 -- common/autotest_common.sh@940 -- # kill -0 131758 00:23:41.328 00:37:34 -- common/autotest_common.sh@941 -- # uname 00:23:41.328 00:37:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:41.328 00:37:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131758 00:23:41.328 00:37:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:41.328 00:37:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:41.328 00:37:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131758' 00:23:41.328 killing process with pid 131758 00:23:41.328 00:37:34 -- common/autotest_common.sh@955 -- # kill 131758 00:23:41.328 Received shutdown signal, test time was about 60.000000 seconds 00:23:41.328 00:23:41.328 Latency(us) 00:23:41.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.328 =================================================================================================================== 00:23:41.328 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:41.328 [2024-04-24 00:37:34.930965] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:41.328 [2024-04-24 00:37:34.931050] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:41.328 [2024-04-24 00:37:34.931112] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:41.328 [2024-04-24 00:37:34.931122] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:23:41.328 00:37:34 -- common/autotest_common.sh@960 -- # wait 131758 00:23:41.586 [2024-04-24 00:37:35.253243] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:42.964 ************************************ 00:23:42.964 END TEST raid_rebuild_test_sb 00:23:42.964 ************************************ 00:23:42.964 00:37:36 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:42.964 00:23:42.964 real 0m28.514s 00:23:42.964 
user 0m40.497s 00:23:42.964 sys 0m5.419s 00:23:42.964 00:37:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:42.964 00:37:36 -- common/autotest_common.sh@10 -- # set +x 00:23:42.964 00:37:36 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:23:42.964 00:37:36 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:42.964 00:37:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:42.964 00:37:36 -- common/autotest_common.sh@10 -- # set +x 00:23:43.222 ************************************ 00:23:43.222 START TEST raid_rebuild_test_io 00:23:43.222 ************************************ 00:23:43.222 00:37:36 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 false true 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@544 -- # raid_pid=132421 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132421 /var/tmp/spdk-raid.sock 00:23:43.222 00:37:36 -- common/autotest_common.sh@817 -- # '[' -z 132421 ']' 00:23:43.222 00:37:36 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:43.222 00:37:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:43.223 00:37:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:43.223 00:37:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:43.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:43.223 00:37:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:43.223 00:37:36 -- common/autotest_common.sh@10 -- # set +x 00:23:43.223 [2024-04-24 00:37:36.853251] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:23:43.223 [2024-04-24 00:37:36.853450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132421 ] 00:23:43.223 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:43.223 Zero copy mechanism will not be used. 00:23:43.481 [2024-04-24 00:37:37.031561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.481 [2024-04-24 00:37:37.248645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.738 [2024-04-24 00:37:37.472342] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:43.996 00:37:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:43.996 00:37:37 -- common/autotest_common.sh@850 -- # return 0 00:23:43.996 00:37:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:43.996 00:37:37 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:43.996 00:37:37 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:44.254 BaseBdev1 00:23:44.254 00:37:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:44.254 00:37:38 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:44.254 00:37:38 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:44.588 BaseBdev2 00:23:44.588 00:37:38 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:44.846 spare_malloc 00:23:44.846 00:37:38 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:45.104 spare_delay 00:23:45.104 00:37:38 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:45.363 [2024-04-24 00:37:39.032418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:45.363 [2024-04-24 00:37:39.032509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.363 [2024-04-24 00:37:39.032545] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:23:45.363 [2024-04-24 00:37:39.032597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.363 [2024-04-24 00:37:39.035259] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.363 [2024-04-24 00:37:39.035314] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:45.363 spare 00:23:45.363 00:37:39 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:45.621 [2024-04-24 00:37:39.308545] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:45.621 [2024-04-24 00:37:39.310795] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:45.621 [2024-04-24 00:37:39.310883] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:23:45.621 [2024-04-24 00:37:39.310895] bdev_raid.c:1702:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 65536, blocklen 512 00:23:45.621 [2024-04-24 00:37:39.311079] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:23:45.621 [2024-04-24 00:37:39.311435] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:23:45.621 [2024-04-24 00:37:39.311456] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:23:45.621 [2024-04-24 00:37:39.311638] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.621 00:37:39 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:45.621 00:37:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:45.621 00:37:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:45.621 00:37:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:45.621 00:37:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:45.621 00:37:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:45.621 00:37:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:45.621 00:37:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:45.621 00:37:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:45.621 00:37:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:45.621 00:37:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.621 00:37:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.879 00:37:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.879 "name": "raid_bdev1", 00:23:45.879 "uuid": "31509f5d-9883-441b-9f55-dbb994fe2e23", 00:23:45.879 "strip_size_kb": 0, 00:23:45.879 "state": "online", 00:23:45.879 "raid_level": "raid1", 00:23:45.879 "superblock": false, 00:23:45.879 "num_base_bdevs": 2, 00:23:45.879 "num_base_bdevs_discovered": 2, 00:23:45.879 "num_base_bdevs_operational": 2, 00:23:45.879 "base_bdevs_list": [ 00:23:45.879 { 00:23:45.879 "name": "BaseBdev1", 00:23:45.879 "uuid": "3b44b8c3-89a4-46b7-a4bf-cd4a885e0b0a", 00:23:45.879 "is_configured": true, 00:23:45.879 "data_offset": 0, 00:23:45.879 "data_size": 65536 00:23:45.879 }, 00:23:45.879 { 00:23:45.879 "name": "BaseBdev2", 00:23:45.879 "uuid": "6431bbb4-9821-4bfc-9fc6-eabedfb42507", 00:23:45.879 "is_configured": true, 00:23:45.879 "data_offset": 0, 00:23:45.879 "data_size": 65536 00:23:45.879 } 00:23:45.879 ] 00:23:45.879 }' 00:23:45.879 00:37:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.879 00:37:39 -- common/autotest_common.sh@10 -- # set +x 00:23:46.445 00:37:40 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:46.445 00:37:40 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:46.703 [2024-04-24 00:37:40.360934] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:46.703 00:37:40 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:23:46.703 00:37:40 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.703 00:37:40 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:46.994 00:37:40 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:46.994 00:37:40 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:46.994 00:37:40 -- bdev/bdev_raid.sh@591 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:46.994 00:37:40 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:46.994 [2024-04-24 00:37:40.704889] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:46.994 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:46.994 Zero copy mechanism will not be used. 00:23:46.994 Running I/O for 60 seconds... 00:23:47.252 [2024-04-24 00:37:40.857001] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:47.252 [2024-04-24 00:37:40.857221] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:23:47.252 00:37:40 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:47.252 00:37:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:47.252 00:37:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:47.252 00:37:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:47.252 00:37:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:47.252 00:37:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:47.252 00:37:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:47.252 00:37:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:47.252 00:37:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:47.252 00:37:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:47.252 00:37:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.252 00:37:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.510 00:37:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:47.511 "name": "raid_bdev1", 00:23:47.511 "uuid": "31509f5d-9883-441b-9f55-dbb994fe2e23", 00:23:47.511 "strip_size_kb": 0, 00:23:47.511 "state": "online", 00:23:47.511 "raid_level": "raid1", 00:23:47.511 "superblock": false, 00:23:47.511 "num_base_bdevs": 2, 00:23:47.511 "num_base_bdevs_discovered": 1, 00:23:47.511 "num_base_bdevs_operational": 1, 00:23:47.511 "base_bdevs_list": [ 00:23:47.511 { 00:23:47.511 "name": null, 00:23:47.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.511 "is_configured": false, 00:23:47.511 "data_offset": 0, 00:23:47.511 "data_size": 65536 00:23:47.511 }, 00:23:47.511 { 00:23:47.511 "name": "BaseBdev2", 00:23:47.511 "uuid": "6431bbb4-9821-4bfc-9fc6-eabedfb42507", 00:23:47.511 "is_configured": true, 00:23:47.511 "data_offset": 0, 00:23:47.511 "data_size": 65536 00:23:47.511 } 00:23:47.511 ] 00:23:47.511 }' 00:23:47.511 00:37:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:47.511 00:37:41 -- common/autotest_common.sh@10 -- # set +x 00:23:48.076 00:37:41 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:48.335 [2024-04-24 00:37:42.104489] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:48.335 [2024-04-24 00:37:42.104546] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:48.593 [2024-04-24 00:37:42.150851] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:48.593 [2024-04-24 00:37:42.153160] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:23:48.593 00:37:42 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:48.593 [2024-04-24 00:37:42.255252] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:48.593 [2024-04-24 00:37:42.255796] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:48.593 [2024-04-24 00:37:42.372775] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:48.593 [2024-04-24 00:37:42.373104] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:49.209 [2024-04-24 00:37:42.721617] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:49.209 [2024-04-24 00:37:42.945721] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:49.209 [2024-04-24 00:37:42.946030] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:49.484 00:37:43 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:49.484 00:37:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:49.484 00:37:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:49.484 00:37:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:49.484 00:37:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:49.484 00:37:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.484 00:37:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.742 [2024-04-24 00:37:43.288337] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:49.742 00:37:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:49.742 "name": "raid_bdev1", 00:23:49.743 "uuid": "31509f5d-9883-441b-9f55-dbb994fe2e23", 00:23:49.743 "strip_size_kb": 0, 00:23:49.743 "state": "online", 00:23:49.743 "raid_level": "raid1", 00:23:49.743 "superblock": false, 00:23:49.743 "num_base_bdevs": 2, 00:23:49.743 "num_base_bdevs_discovered": 2, 00:23:49.743 "num_base_bdevs_operational": 2, 00:23:49.743 "process": { 00:23:49.743 "type": "rebuild", 00:23:49.743 "target": "spare", 00:23:49.743 "progress": { 00:23:49.743 "blocks": 14336, 00:23:49.743 "percent": 21 00:23:49.743 } 00:23:49.743 }, 00:23:49.743 "base_bdevs_list": [ 00:23:49.743 { 00:23:49.743 "name": "spare", 00:23:49.743 "uuid": "040fc539-e12f-5009-8e5e-b847d46f7006", 00:23:49.743 "is_configured": true, 00:23:49.743 "data_offset": 0, 00:23:49.743 "data_size": 65536 00:23:49.743 }, 00:23:49.743 { 00:23:49.743 "name": "BaseBdev2", 00:23:49.743 "uuid": "6431bbb4-9821-4bfc-9fc6-eabedfb42507", 00:23:49.743 "is_configured": true, 00:23:49.743 "data_offset": 0, 00:23:49.743 "data_size": 65536 00:23:49.743 } 00:23:49.743 ] 00:23:49.743 }' 00:23:49.743 00:37:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:49.743 00:37:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.743 00:37:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:49.743 [2024-04-24 00:37:43.497460] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 
12288 offset_end: 18432 00:23:49.743 00:37:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:49.743 00:37:43 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:50.001 [2024-04-24 00:37:43.706014] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:50.002 [2024-04-24 00:37:43.729131] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:50.259 [2024-04-24 00:37:43.836814] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:50.259 [2024-04-24 00:37:43.839697] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:50.259 [2024-04-24 00:37:43.888938] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:23:50.259 00:37:43 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:50.259 00:37:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:50.259 00:37:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:50.259 00:37:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:50.259 00:37:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:50.259 00:37:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:23:50.259 00:37:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:50.259 00:37:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:50.259 00:37:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:50.259 00:37:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:50.259 00:37:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:50.260 00:37:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.518 00:37:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:50.518 "name": "raid_bdev1", 00:23:50.518 "uuid": "31509f5d-9883-441b-9f55-dbb994fe2e23", 00:23:50.518 "strip_size_kb": 0, 00:23:50.518 "state": "online", 00:23:50.518 "raid_level": "raid1", 00:23:50.518 "superblock": false, 00:23:50.518 "num_base_bdevs": 2, 00:23:50.518 "num_base_bdevs_discovered": 1, 00:23:50.518 "num_base_bdevs_operational": 1, 00:23:50.518 "base_bdevs_list": [ 00:23:50.518 { 00:23:50.518 "name": null, 00:23:50.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.518 "is_configured": false, 00:23:50.518 "data_offset": 0, 00:23:50.518 "data_size": 65536 00:23:50.518 }, 00:23:50.518 { 00:23:50.518 "name": "BaseBdev2", 00:23:50.518 "uuid": "6431bbb4-9821-4bfc-9fc6-eabedfb42507", 00:23:50.518 "is_configured": true, 00:23:50.518 "data_offset": 0, 00:23:50.518 "data_size": 65536 00:23:50.518 } 00:23:50.518 ] 00:23:50.518 }' 00:23:50.518 00:37:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:50.518 00:37:44 -- common/autotest_common.sh@10 -- # set +x 00:23:51.085 00:37:44 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:51.085 00:37:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:51.085 00:37:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:51.085 00:37:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:51.085 00:37:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:51.085 00:37:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.085 00:37:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.343 00:37:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:51.343 "name": "raid_bdev1", 00:23:51.343 "uuid": "31509f5d-9883-441b-9f55-dbb994fe2e23", 00:23:51.343 "strip_size_kb": 0, 00:23:51.343 "state": "online", 00:23:51.343 "raid_level": "raid1", 00:23:51.343 "superblock": false, 00:23:51.343 "num_base_bdevs": 2, 00:23:51.343 "num_base_bdevs_discovered": 1, 00:23:51.343 "num_base_bdevs_operational": 1, 00:23:51.343 "base_bdevs_list": [ 00:23:51.343 { 00:23:51.343 "name": null, 00:23:51.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.343 "is_configured": false, 00:23:51.343 "data_offset": 0, 00:23:51.343 "data_size": 65536 00:23:51.343 }, 00:23:51.343 { 00:23:51.343 "name": "BaseBdev2", 00:23:51.343 "uuid": "6431bbb4-9821-4bfc-9fc6-eabedfb42507", 00:23:51.343 "is_configured": true, 00:23:51.343 "data_offset": 0, 00:23:51.343 "data_size": 65536 00:23:51.343 } 00:23:51.343 ] 00:23:51.343 }' 00:23:51.343 00:37:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:51.343 00:37:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:51.343 00:37:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:51.343 00:37:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:51.343 00:37:45 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:51.912 [2024-04-24 00:37:45.447388] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:51.913 [2024-04-24 00:37:45.447446] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:51.913 00:37:45 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:51.913 [2024-04-24 00:37:45.520585] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:51.913 [2024-04-24 00:37:45.522865] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:51.913 [2024-04-24 00:37:45.640126] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:51.913 [2024-04-24 00:37:45.640670] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:52.171 [2024-04-24 00:37:45.769313] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:52.171 [2024-04-24 00:37:45.769624] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:52.430 [2024-04-24 00:37:46.118661] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:52.689 [2024-04-24 00:37:46.241839] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:52.948 00:37:46 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:52.948 00:37:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:52.948 00:37:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:52.948 00:37:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:52.948 00:37:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:52.948 00:37:46 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.948 00:37:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.948 [2024-04-24 00:37:46.581395] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:53.207 "name": "raid_bdev1", 00:23:53.207 "uuid": "31509f5d-9883-441b-9f55-dbb994fe2e23", 00:23:53.207 "strip_size_kb": 0, 00:23:53.207 "state": "online", 00:23:53.207 "raid_level": "raid1", 00:23:53.207 "superblock": false, 00:23:53.207 "num_base_bdevs": 2, 00:23:53.207 "num_base_bdevs_discovered": 2, 00:23:53.207 "num_base_bdevs_operational": 2, 00:23:53.207 "process": { 00:23:53.207 "type": "rebuild", 00:23:53.207 "target": "spare", 00:23:53.207 "progress": { 00:23:53.207 "blocks": 18432, 00:23:53.207 "percent": 28 00:23:53.207 } 00:23:53.207 }, 00:23:53.207 "base_bdevs_list": [ 00:23:53.207 { 00:23:53.207 "name": "spare", 00:23:53.207 "uuid": "040fc539-e12f-5009-8e5e-b847d46f7006", 00:23:53.207 "is_configured": true, 00:23:53.207 "data_offset": 0, 00:23:53.207 "data_size": 65536 00:23:53.207 }, 00:23:53.207 { 00:23:53.207 "name": "BaseBdev2", 00:23:53.207 "uuid": "6431bbb4-9821-4bfc-9fc6-eabedfb42507", 00:23:53.207 "is_configured": true, 00:23:53.207 "data_offset": 0, 00:23:53.207 "data_size": 65536 00:23:53.207 } 00:23:53.207 ] 00:23:53.207 }' 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:53.207 [2024-04-24 00:37:46.896282] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:53.207 [2024-04-24 00:37:46.896827] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@657 -- # local timeout=489 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.207 00:37:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.465 00:37:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:53.465 "name": "raid_bdev1", 00:23:53.465 "uuid": "31509f5d-9883-441b-9f55-dbb994fe2e23", 00:23:53.465 "strip_size_kb": 0, 00:23:53.465 "state": "online", 00:23:53.465 "raid_level": "raid1", 00:23:53.465 "superblock": 
false, 00:23:53.465 "num_base_bdevs": 2, 00:23:53.465 "num_base_bdevs_discovered": 2, 00:23:53.465 "num_base_bdevs_operational": 2, 00:23:53.465 "process": { 00:23:53.466 "type": "rebuild", 00:23:53.466 "target": "spare", 00:23:53.466 "progress": { 00:23:53.466 "blocks": 22528, 00:23:53.466 "percent": 34 00:23:53.466 } 00:23:53.466 }, 00:23:53.466 "base_bdevs_list": [ 00:23:53.466 { 00:23:53.466 "name": "spare", 00:23:53.466 "uuid": "040fc539-e12f-5009-8e5e-b847d46f7006", 00:23:53.466 "is_configured": true, 00:23:53.466 "data_offset": 0, 00:23:53.466 "data_size": 65536 00:23:53.466 }, 00:23:53.466 { 00:23:53.466 "name": "BaseBdev2", 00:23:53.466 "uuid": "6431bbb4-9821-4bfc-9fc6-eabedfb42507", 00:23:53.466 "is_configured": true, 00:23:53.466 "data_offset": 0, 00:23:53.466 "data_size": 65536 00:23:53.466 } 00:23:53.466 ] 00:23:53.466 }' 00:23:53.466 00:37:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:53.466 00:37:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:53.466 00:37:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:53.723 00:37:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:53.724 00:37:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:53.724 [2024-04-24 00:37:47.340565] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:54.289 [2024-04-24 00:37:47.806729] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:54.289 [2024-04-24 00:37:48.008519] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:54.289 [2024-04-24 00:37:48.008849] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:54.549 [2024-04-24 00:37:48.262640] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:23:54.549 00:37:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:54.549 00:37:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.549 00:37:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:54.549 00:37:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:54.549 00:37:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:54.549 00:37:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:54.549 00:37:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.549 00:37:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.808 [2024-04-24 00:37:48.472625] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:54.808 [2024-04-24 00:37:48.472925] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:23:54.808 00:37:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:54.808 "name": "raid_bdev1", 00:23:54.808 "uuid": "31509f5d-9883-441b-9f55-dbb994fe2e23", 00:23:54.808 "strip_size_kb": 0, 00:23:54.808 "state": "online", 00:23:54.808 "raid_level": "raid1", 00:23:54.808 "superblock": false, 00:23:54.808 "num_base_bdevs": 2, 00:23:54.808 "num_base_bdevs_discovered": 2, 00:23:54.808 "num_base_bdevs_operational": 2, 00:23:54.808 
"process": { 00:23:54.808 "type": "rebuild", 00:23:54.808 "target": "spare", 00:23:54.808 "progress": { 00:23:54.808 "blocks": 40960, 00:23:54.808 "percent": 62 00:23:54.808 } 00:23:54.808 }, 00:23:54.808 "base_bdevs_list": [ 00:23:54.808 { 00:23:54.808 "name": "spare", 00:23:54.808 "uuid": "040fc539-e12f-5009-8e5e-b847d46f7006", 00:23:54.808 "is_configured": true, 00:23:54.808 "data_offset": 0, 00:23:54.808 "data_size": 65536 00:23:54.808 }, 00:23:54.808 { 00:23:54.808 "name": "BaseBdev2", 00:23:54.808 "uuid": "6431bbb4-9821-4bfc-9fc6-eabedfb42507", 00:23:54.808 "is_configured": true, 00:23:54.808 "data_offset": 0, 00:23:54.808 "data_size": 65536 00:23:54.808 } 00:23:54.808 ] 00:23:54.808 }' 00:23:54.808 00:37:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:55.066 00:37:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:55.066 00:37:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:55.066 00:37:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:55.066 00:37:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:55.634 [2024-04-24 00:37:49.156490] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:55.891 00:37:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:55.891 00:37:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:55.891 00:37:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:55.891 00:37:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:55.891 00:37:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:55.891 00:37:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:55.891 00:37:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.891 00:37:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.148 [2024-04-24 00:37:49.917114] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:56.406 00:37:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:56.406 "name": "raid_bdev1", 00:23:56.406 "uuid": "31509f5d-9883-441b-9f55-dbb994fe2e23", 00:23:56.406 "strip_size_kb": 0, 00:23:56.406 "state": "online", 00:23:56.406 "raid_level": "raid1", 00:23:56.406 "superblock": false, 00:23:56.406 "num_base_bdevs": 2, 00:23:56.406 "num_base_bdevs_discovered": 2, 00:23:56.406 "num_base_bdevs_operational": 2, 00:23:56.406 "process": { 00:23:56.406 "type": "rebuild", 00:23:56.406 "target": "spare", 00:23:56.406 "progress": { 00:23:56.406 "blocks": 65536, 00:23:56.406 "percent": 100 00:23:56.406 } 00:23:56.406 }, 00:23:56.406 "base_bdevs_list": [ 00:23:56.406 { 00:23:56.406 "name": "spare", 00:23:56.406 "uuid": "040fc539-e12f-5009-8e5e-b847d46f7006", 00:23:56.406 "is_configured": true, 00:23:56.406 "data_offset": 0, 00:23:56.406 "data_size": 65536 00:23:56.406 }, 00:23:56.406 { 00:23:56.406 "name": "BaseBdev2", 00:23:56.406 "uuid": "6431bbb4-9821-4bfc-9fc6-eabedfb42507", 00:23:56.406 "is_configured": true, 00:23:56.406 "data_offset": 0, 00:23:56.406 "data_size": 65536 00:23:56.406 } 00:23:56.406 ] 00:23:56.406 }' 00:23:56.406 00:37:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:56.406 00:37:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:56.406 00:37:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:56.406 [2024-04-24 00:37:50.023982] 
bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:56.406 [2024-04-24 00:37:50.027025] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.406 00:37:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:56.406 00:37:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:57.343 00:37:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:57.343 00:37:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.343 00:37:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.343 00:37:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:57.343 00:37:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:57.343 00:37:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.343 00:37:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.343 00:37:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.601 00:37:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.601 "name": "raid_bdev1", 00:23:57.601 "uuid": "31509f5d-9883-441b-9f55-dbb994fe2e23", 00:23:57.601 "strip_size_kb": 0, 00:23:57.601 "state": "online", 00:23:57.601 "raid_level": "raid1", 00:23:57.601 "superblock": false, 00:23:57.601 "num_base_bdevs": 2, 00:23:57.601 "num_base_bdevs_discovered": 2, 00:23:57.601 "num_base_bdevs_operational": 2, 00:23:57.601 "base_bdevs_list": [ 00:23:57.601 { 00:23:57.601 "name": "spare", 00:23:57.601 "uuid": "040fc539-e12f-5009-8e5e-b847d46f7006", 00:23:57.601 "is_configured": true, 00:23:57.601 "data_offset": 0, 00:23:57.601 "data_size": 65536 00:23:57.601 }, 00:23:57.601 { 00:23:57.601 "name": "BaseBdev2", 00:23:57.602 "uuid": "6431bbb4-9821-4bfc-9fc6-eabedfb42507", 00:23:57.602 "is_configured": true, 00:23:57.602 "data_offset": 0, 00:23:57.602 "data_size": 65536 00:23:57.602 } 00:23:57.602 ] 00:23:57.602 }' 00:23:57.602 00:37:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:57.602 00:37:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:57.602 00:37:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.602 00:37:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:57.602 00:37:51 -- bdev/bdev_raid.sh@660 -- # break 00:23:57.602 00:37:51 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:57.602 00:37:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.602 00:37:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:57.602 00:37:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:57.602 00:37:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.602 00:37:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.602 00:37:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.860 "name": "raid_bdev1", 00:23:57.860 "uuid": "31509f5d-9883-441b-9f55-dbb994fe2e23", 00:23:57.860 "strip_size_kb": 0, 00:23:57.860 "state": "online", 00:23:57.860 "raid_level": "raid1", 00:23:57.860 "superblock": false, 00:23:57.860 "num_base_bdevs": 2, 00:23:57.860 "num_base_bdevs_discovered": 2, 00:23:57.860 "num_base_bdevs_operational": 2, 00:23:57.860 "base_bdevs_list": [ 00:23:57.860 { 00:23:57.860 "name": "spare", 
00:23:57.860 "uuid": "040fc539-e12f-5009-8e5e-b847d46f7006", 00:23:57.860 "is_configured": true, 00:23:57.860 "data_offset": 0, 00:23:57.860 "data_size": 65536 00:23:57.860 }, 00:23:57.860 { 00:23:57.860 "name": "BaseBdev2", 00:23:57.860 "uuid": "6431bbb4-9821-4bfc-9fc6-eabedfb42507", 00:23:57.860 "is_configured": true, 00:23:57.860 "data_offset": 0, 00:23:57.860 "data_size": 65536 00:23:57.860 } 00:23:57.860 ] 00:23:57.860 }' 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:57.860 00:37:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:58.118 00:37:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.118 00:37:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.375 00:37:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:58.375 "name": "raid_bdev1", 00:23:58.375 "uuid": "31509f5d-9883-441b-9f55-dbb994fe2e23", 00:23:58.375 "strip_size_kb": 0, 00:23:58.375 "state": "online", 00:23:58.375 "raid_level": "raid1", 00:23:58.375 "superblock": false, 00:23:58.375 "num_base_bdevs": 2, 00:23:58.375 "num_base_bdevs_discovered": 2, 00:23:58.375 "num_base_bdevs_operational": 2, 00:23:58.375 "base_bdevs_list": [ 00:23:58.375 { 00:23:58.375 "name": "spare", 00:23:58.375 "uuid": "040fc539-e12f-5009-8e5e-b847d46f7006", 00:23:58.375 "is_configured": true, 00:23:58.375 "data_offset": 0, 00:23:58.375 "data_size": 65536 00:23:58.375 }, 00:23:58.375 { 00:23:58.375 "name": "BaseBdev2", 00:23:58.375 "uuid": "6431bbb4-9821-4bfc-9fc6-eabedfb42507", 00:23:58.375 "is_configured": true, 00:23:58.375 "data_offset": 0, 00:23:58.375 "data_size": 65536 00:23:58.375 } 00:23:58.375 ] 00:23:58.375 }' 00:23:58.375 00:37:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:58.375 00:37:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.943 00:37:52 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:58.943 [2024-04-24 00:37:52.716644] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:58.943 [2024-04-24 00:37:52.716685] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:59.202 00:23:59.202 Latency(us) 00:23:59.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:59.202 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:59.202 raid_bdev1 : 12.06 106.21 318.62 0.00 0.00 12617.55 
378.39 116342.00 00:23:59.202 =================================================================================================================== 00:23:59.202 Total : 106.21 318.62 0.00 0.00 12617.55 378.39 116342.00 00:23:59.202 [2024-04-24 00:37:52.794490] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.202 [2024-04-24 00:37:52.794543] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:59.202 [2024-04-24 00:37:52.794622] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:59.202 [2024-04-24 00:37:52.794634] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:23:59.202 0 00:23:59.202 00:37:52 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.202 00:37:52 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:59.459 00:37:53 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:59.459 00:37:53 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:59.459 00:37:53 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:59.459 00:37:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:59.459 00:37:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:59.459 00:37:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:59.459 00:37:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:59.459 00:37:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:59.459 00:37:53 -- bdev/nbd_common.sh@12 -- # local i 00:23:59.459 00:37:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:59.459 00:37:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:59.459 00:37:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:59.718 /dev/nbd0 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:59.718 00:37:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:59.718 00:37:53 -- common/autotest_common.sh@855 -- # local i 00:23:59.718 00:37:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:59.718 00:37:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:59.718 00:37:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:59.718 00:37:53 -- common/autotest_common.sh@859 -- # break 00:23:59.718 00:37:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:59.718 00:37:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:59.718 00:37:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:59.718 1+0 records in 00:23:59.718 1+0 records out 00:23:59.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370886 s, 11.0 MB/s 00:23:59.718 00:37:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.718 00:37:53 -- common/autotest_common.sh@872 -- # size=4096 00:23:59.718 00:37:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.718 00:37:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:59.718 00:37:53 -- common/autotest_common.sh@875 -- # return 0 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 
)) 00:23:59.718 00:37:53 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:59.718 00:37:53 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:23:59.718 00:37:53 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@12 -- # local i 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:59.718 00:37:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:59.976 /dev/nbd1 00:23:59.976 00:37:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:59.976 00:37:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:59.976 00:37:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:23:59.976 00:37:53 -- common/autotest_common.sh@855 -- # local i 00:23:59.976 00:37:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:59.976 00:37:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:59.976 00:37:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:23:59.976 00:37:53 -- common/autotest_common.sh@859 -- # break 00:23:59.976 00:37:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:59.976 00:37:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:59.976 00:37:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:59.976 1+0 records in 00:23:59.976 1+0 records out 00:23:59.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000837158 s, 4.9 MB/s 00:23:59.976 00:37:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.976 00:37:53 -- common/autotest_common.sh@872 -- # size=4096 00:23:59.976 00:37:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:59.976 00:37:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:59.976 00:37:53 -- common/autotest_common.sh@875 -- # return 0 00:23:59.976 00:37:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:59.976 00:37:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:59.976 00:37:53 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:00.234 00:37:53 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:00.234 00:37:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:00.234 00:37:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:00.234 00:37:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:00.234 00:37:53 -- bdev/nbd_common.sh@51 -- # local i 00:24:00.234 00:37:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:00.234 00:37:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:00.492 
00:37:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@41 -- # break 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@45 -- # return 0 00:24:00.492 00:37:54 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@51 -- # local i 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:00.492 00:37:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:00.750 00:37:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:00.750 00:37:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:00.750 00:37:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:00.750 00:37:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:00.750 00:37:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:00.750 00:37:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:00.750 00:37:54 -- bdev/nbd_common.sh@41 -- # break 00:24:00.750 00:37:54 -- bdev/nbd_common.sh@45 -- # return 0 00:24:00.750 00:37:54 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:01.008 00:37:54 -- bdev/bdev_raid.sh@709 -- # killprocess 132421 00:24:01.008 00:37:54 -- common/autotest_common.sh@936 -- # '[' -z 132421 ']' 00:24:01.008 00:37:54 -- common/autotest_common.sh@940 -- # kill -0 132421 00:24:01.008 00:37:54 -- common/autotest_common.sh@941 -- # uname 00:24:01.008 00:37:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:01.008 00:37:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132421 00:24:01.008 00:37:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:01.008 00:37:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:01.008 00:37:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132421' 00:24:01.008 killing process with pid 132421 00:24:01.008 00:37:54 -- common/autotest_common.sh@955 -- # kill 132421 00:24:01.008 Received shutdown signal, test time was about 13.856953 seconds 00:24:01.008 00:24:01.008 Latency(us) 00:24:01.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.008 =================================================================================================================== 00:24:01.008 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.008 [2024-04-24 00:37:54.564323] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:01.008 00:37:54 -- common/autotest_common.sh@960 -- # wait 132421 00:24:01.267 [2024-04-24 00:37:54.819388] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:02.642 ************************************ 00:24:02.642 END TEST raid_rebuild_test_io 00:24:02.642 ************************************ 00:24:02.642 00:37:56 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:02.642 00:24:02.642 real 0m19.563s 00:24:02.642 user 0m29.142s 00:24:02.642 sys 0m2.522s 00:24:02.642 00:37:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:02.642 00:37:56 -- common/autotest_common.sh@10 -- # set +x 00:24:02.642 00:37:56 -- 
bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:24:02.642 00:37:56 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:24:02.642 00:37:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:02.642 00:37:56 -- common/autotest_common.sh@10 -- # set +x 00:24:02.900 ************************************ 00:24:02.900 START TEST raid_rebuild_test_sb_io 00:24:02.900 ************************************ 00:24:02.900 00:37:56 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 true true 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@544 -- # raid_pid=132924 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:02.900 00:37:56 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132924 /var/tmp/spdk-raid.sock 00:24:02.900 00:37:56 -- common/autotest_common.sh@817 -- # '[' -z 132924 ']' 00:24:02.900 00:37:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:02.900 00:37:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:02.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:02.900 00:37:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:02.900 00:37:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:02.900 00:37:56 -- common/autotest_common.sh@10 -- # set +x 00:24:02.900 [2024-04-24 00:37:56.530125] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
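The trace above shows the sb_io variant launching a dedicated bdevperf instance and waiting on its RPC socket before any bdevs are configured. A minimal sketch of that launch-and-wait step, kept to the flags visible in the trace (the $rootdir variable and the waitforlisten helper from the shared autotest scripts are assumptions about the surrounding framework, not taken verbatim from this log):

    # Start bdevperf in the background with the workload knobs from the traced command:
    # 60 s of randrw at 50% reads, 3 MiB I/Os, queue depth 2, on a private RPC socket.
    "$rootdir/build/examples/bdevperf" -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Block until the process is alive and its UNIX-domain socket accepts RPCs.
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock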
00:24:02.900 [2024-04-24 00:37:56.530327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132924 ] 00:24:02.900 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:02.900 Zero copy mechanism will not be used. 00:24:03.158 [2024-04-24 00:37:56.706516] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.159 [2024-04-24 00:37:56.924529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.416 [2024-04-24 00:37:57.148351] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:03.981 00:37:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:03.981 00:37:57 -- common/autotest_common.sh@850 -- # return 0 00:24:03.981 00:37:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:03.981 00:37:57 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:03.981 00:37:57 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:04.239 BaseBdev1_malloc 00:24:04.239 00:37:57 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:04.497 [2024-04-24 00:37:58.103471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:04.497 [2024-04-24 00:37:58.103588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.497 [2024-04-24 00:37:58.103628] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:24:04.497 [2024-04-24 00:37:58.103678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.497 [2024-04-24 00:37:58.106210] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.497 [2024-04-24 00:37:58.106261] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:04.497 BaseBdev1 00:24:04.497 00:37:58 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:04.497 00:37:58 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:04.497 00:37:58 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:04.754 BaseBdev2_malloc 00:24:04.754 00:37:58 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:05.011 [2024-04-24 00:37:58.643069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:05.011 [2024-04-24 00:37:58.643159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.011 [2024-04-24 00:37:58.643207] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:05.011 [2024-04-24 00:37:58.643281] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.011 [2024-04-24 00:37:58.645805] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.011 [2024-04-24 00:37:58.645861] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:05.011 BaseBdev2 00:24:05.011 00:37:58 -- bdev/bdev_raid.sh@558 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:05.269 spare_malloc 00:24:05.269 00:37:58 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:05.526 spare_delay 00:24:05.526 00:37:59 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:05.784 [2024-04-24 00:37:59.385440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:05.784 [2024-04-24 00:37:59.385530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.784 [2024-04-24 00:37:59.385578] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:24:05.784 [2024-04-24 00:37:59.385630] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.784 [2024-04-24 00:37:59.388243] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.784 [2024-04-24 00:37:59.388302] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:05.784 spare 00:24:05.784 00:37:59 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:24:06.043 [2024-04-24 00:37:59.593588] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:06.043 [2024-04-24 00:37:59.595818] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:06.043 [2024-04-24 00:37:59.596058] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:24:06.043 [2024-04-24 00:37:59.596079] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:06.043 [2024-04-24 00:37:59.596233] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:06.043 [2024-04-24 00:37:59.596584] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:24:06.043 [2024-04-24 00:37:59.596603] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:24:06.043 [2024-04-24 00:37:59.596780] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.043 00:37:59 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:06.043 00:37:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:06.043 00:37:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:06.043 00:37:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:06.043 00:37:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:06.043 00:37:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:06.043 00:37:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:06.043 00:37:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:06.043 00:37:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:06.043 00:37:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:06.043 00:37:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.043 00:37:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.302 
00:37:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:06.302 "name": "raid_bdev1", 00:24:06.302 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:06.302 "strip_size_kb": 0, 00:24:06.302 "state": "online", 00:24:06.302 "raid_level": "raid1", 00:24:06.302 "superblock": true, 00:24:06.302 "num_base_bdevs": 2, 00:24:06.302 "num_base_bdevs_discovered": 2, 00:24:06.302 "num_base_bdevs_operational": 2, 00:24:06.302 "base_bdevs_list": [ 00:24:06.302 { 00:24:06.302 "name": "BaseBdev1", 00:24:06.302 "uuid": "56a18c31-f2c8-58d2-8eb4-52851d39ff99", 00:24:06.302 "is_configured": true, 00:24:06.302 "data_offset": 2048, 00:24:06.302 "data_size": 63488 00:24:06.302 }, 00:24:06.302 { 00:24:06.302 "name": "BaseBdev2", 00:24:06.302 "uuid": "4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:06.302 "is_configured": true, 00:24:06.302 "data_offset": 2048, 00:24:06.302 "data_size": 63488 00:24:06.302 } 00:24:06.302 ] 00:24:06.302 }' 00:24:06.302 00:37:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:06.302 00:37:59 -- common/autotest_common.sh@10 -- # set +x 00:24:06.869 00:38:00 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:06.869 00:38:00 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:07.128 [2024-04-24 00:38:00.686016] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:07.128 00:38:00 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:24:07.128 00:38:00 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.128 00:38:00 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:07.437 00:38:00 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:07.437 00:38:00 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:24:07.437 00:38:00 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:07.437 00:38:00 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:07.437 [2024-04-24 00:38:01.110541] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:24:07.437 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:07.437 Zero copy mechanism will not be used. 00:24:07.437 Running I/O for 60 seconds... 
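Every verify_raid_bdev_process / verify_raid_bdev_state block traced in this run reduces to the same polling pattern: dump the RAID bdev over RPC and pull individual fields out with jq. A condensed sketch of that loop, using only the RPC call and jq filters seen above (the 60-second bound is illustrative, not the value the script uses):

    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=$((SECONDS + 60))            # illustrative bound
    while (( SECONDS < timeout )); do
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        process_type=$(jq -r '.process.type // "none"' <<< "$info")
        target=$(jq -r '.process.target // "none"' <<< "$info")
        # While the rebuild runs these read "rebuild"/"spare"; once it completes
        # the process object disappears and both fall back to "none".
        [[ $process_type == none && $target == none ]] && break
        sleep 1
    done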
00:24:07.717 [2024-04-24 00:38:01.217496] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:07.717 [2024-04-24 00:38:01.224286] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:07.717 "name": "raid_bdev1", 00:24:07.717 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:07.717 "strip_size_kb": 0, 00:24:07.717 "state": "online", 00:24:07.717 "raid_level": "raid1", 00:24:07.717 "superblock": true, 00:24:07.717 "num_base_bdevs": 2, 00:24:07.717 "num_base_bdevs_discovered": 1, 00:24:07.717 "num_base_bdevs_operational": 1, 00:24:07.717 "base_bdevs_list": [ 00:24:07.717 { 00:24:07.717 "name": null, 00:24:07.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.717 "is_configured": false, 00:24:07.717 "data_offset": 2048, 00:24:07.717 "data_size": 63488 00:24:07.717 }, 00:24:07.717 { 00:24:07.717 "name": "BaseBdev2", 00:24:07.717 "uuid": "4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:07.717 "is_configured": true, 00:24:07.717 "data_offset": 2048, 00:24:07.717 "data_size": 63488 00:24:07.717 } 00:24:07.717 ] 00:24:07.717 }' 00:24:07.717 00:38:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:07.717 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:24:08.649 00:38:02 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:08.649 [2024-04-24 00:38:02.301279] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:08.649 [2024-04-24 00:38:02.301339] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:08.649 00:38:02 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:08.649 [2024-04-24 00:38:02.361598] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:08.649 [2024-04-24 00:38:02.363854] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:08.907 [2024-04-24 00:38:02.480500] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:08.907 [2024-04-24 00:38:02.481063] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:09.164 [2024-04-24 00:38:02.715551] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:24:09.164 [2024-04-24 00:38:02.715870] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:09.475 [2024-04-24 00:38:03.070215] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:09.475 [2024-04-24 00:38:03.070782] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:09.734 [2024-04-24 00:38:03.282072] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:09.734 [2024-04-24 00:38:03.282400] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:09.734 00:38:03 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.734 00:38:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:09.734 00:38:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:09.734 00:38:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:09.734 00:38:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:09.734 00:38:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.734 00:38:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.991 00:38:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:09.991 "name": "raid_bdev1", 00:24:09.991 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:09.991 "strip_size_kb": 0, 00:24:09.991 "state": "online", 00:24:09.991 "raid_level": "raid1", 00:24:09.991 "superblock": true, 00:24:09.991 "num_base_bdevs": 2, 00:24:09.991 "num_base_bdevs_discovered": 2, 00:24:09.991 "num_base_bdevs_operational": 2, 00:24:09.991 "process": { 00:24:09.991 "type": "rebuild", 00:24:09.991 "target": "spare", 00:24:09.991 "progress": { 00:24:09.991 "blocks": 12288, 00:24:09.991 "percent": 19 00:24:09.991 } 00:24:09.991 }, 00:24:09.991 "base_bdevs_list": [ 00:24:09.991 { 00:24:09.991 "name": "spare", 00:24:09.991 "uuid": "b0c0e0f9-6476-5f35-a5fe-634d4b98de6e", 00:24:09.991 "is_configured": true, 00:24:09.991 "data_offset": 2048, 00:24:09.991 "data_size": 63488 00:24:09.991 }, 00:24:09.991 { 00:24:09.991 "name": "BaseBdev2", 00:24:09.991 "uuid": "4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:09.991 "is_configured": true, 00:24:09.991 "data_offset": 2048, 00:24:09.991 "data_size": 63488 00:24:09.991 } 00:24:09.991 ] 00:24:09.991 }' 00:24:09.991 00:38:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:09.991 [2024-04-24 00:38:03.619105] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:09.991 [2024-04-24 00:38:03.626628] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:09.991 00:38:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.991 00:38:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:09.991 00:38:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.991 00:38:03 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:10.248 [2024-04-24 00:38:03.957040] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:10.506 [2024-04-24 
00:38:04.120075] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:10.506 [2024-04-24 00:38:04.122999] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:10.506 [2024-04-24 00:38:04.177286] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:24:10.506 00:38:04 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:10.506 00:38:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:10.506 00:38:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:10.506 00:38:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:10.506 00:38:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:10.506 00:38:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:24:10.506 00:38:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:10.506 00:38:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:10.506 00:38:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:10.506 00:38:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:10.506 00:38:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.506 00:38:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.765 00:38:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:10.765 "name": "raid_bdev1", 00:24:10.765 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:10.765 "strip_size_kb": 0, 00:24:10.765 "state": "online", 00:24:10.765 "raid_level": "raid1", 00:24:10.765 "superblock": true, 00:24:10.765 "num_base_bdevs": 2, 00:24:10.765 "num_base_bdevs_discovered": 1, 00:24:10.765 "num_base_bdevs_operational": 1, 00:24:10.765 "base_bdevs_list": [ 00:24:10.765 { 00:24:10.765 "name": null, 00:24:10.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.765 "is_configured": false, 00:24:10.765 "data_offset": 2048, 00:24:10.765 "data_size": 63488 00:24:10.765 }, 00:24:10.765 { 00:24:10.765 "name": "BaseBdev2", 00:24:10.765 "uuid": "4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:10.765 "is_configured": true, 00:24:10.765 "data_offset": 2048, 00:24:10.765 "data_size": 63488 00:24:10.765 } 00:24:10.765 ] 00:24:10.765 }' 00:24:10.765 00:38:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:10.765 00:38:04 -- common/autotest_common.sh@10 -- # set +x 00:24:11.331 00:38:05 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:11.331 00:38:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:11.331 00:38:05 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:11.331 00:38:05 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:11.331 00:38:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:11.331 00:38:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.331 00:38:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.637 00:38:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:11.637 "name": "raid_bdev1", 00:24:11.637 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:11.637 "strip_size_kb": 0, 00:24:11.637 "state": "online", 00:24:11.637 "raid_level": "raid1", 00:24:11.637 "superblock": true, 00:24:11.637 "num_base_bdevs": 2, 00:24:11.637 "num_base_bdevs_discovered": 1, 00:24:11.637 "num_base_bdevs_operational": 
1, 00:24:11.637 "base_bdevs_list": [ 00:24:11.637 { 00:24:11.637 "name": null, 00:24:11.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.637 "is_configured": false, 00:24:11.637 "data_offset": 2048, 00:24:11.637 "data_size": 63488 00:24:11.637 }, 00:24:11.637 { 00:24:11.637 "name": "BaseBdev2", 00:24:11.637 "uuid": "4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:11.637 "is_configured": true, 00:24:11.637 "data_offset": 2048, 00:24:11.637 "data_size": 63488 00:24:11.637 } 00:24:11.637 ] 00:24:11.637 }' 00:24:11.637 00:38:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:11.895 00:38:05 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:11.895 00:38:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:11.895 00:38:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:11.895 00:38:05 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:12.154 [2024-04-24 00:38:05.777046] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:12.154 [2024-04-24 00:38:05.777106] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:12.154 00:38:05 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:12.154 [2024-04-24 00:38:05.835979] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:12.154 [2024-04-24 00:38:05.838202] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:12.412 [2024-04-24 00:38:05.962264] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:12.412 [2024-04-24 00:38:05.962825] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:12.412 [2024-04-24 00:38:06.167234] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:12.412 [2024-04-24 00:38:06.167556] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:12.979 [2024-04-24 00:38:06.502555] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:12.979 [2024-04-24 00:38:06.503104] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:12.979 [2024-04-24 00:38:06.711826] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:12.979 [2024-04-24 00:38:06.712149] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:13.239 00:38:06 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.239 00:38:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:13.239 00:38:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:13.239 00:38:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:13.239 00:38:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:13.239 00:38:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.239 00:38:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.497 [2024-04-24 00:38:07.050701] 
bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:13.497 [2024-04-24 00:38:07.051282] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:13.497 00:38:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:13.497 "name": "raid_bdev1", 00:24:13.497 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:13.497 "strip_size_kb": 0, 00:24:13.497 "state": "online", 00:24:13.497 "raid_level": "raid1", 00:24:13.497 "superblock": true, 00:24:13.497 "num_base_bdevs": 2, 00:24:13.497 "num_base_bdevs_discovered": 2, 00:24:13.497 "num_base_bdevs_operational": 2, 00:24:13.497 "process": { 00:24:13.497 "type": "rebuild", 00:24:13.497 "target": "spare", 00:24:13.497 "progress": { 00:24:13.497 "blocks": 12288, 00:24:13.497 "percent": 19 00:24:13.497 } 00:24:13.497 }, 00:24:13.497 "base_bdevs_list": [ 00:24:13.497 { 00:24:13.497 "name": "spare", 00:24:13.497 "uuid": "b0c0e0f9-6476-5f35-a5fe-634d4b98de6e", 00:24:13.497 "is_configured": true, 00:24:13.497 "data_offset": 2048, 00:24:13.497 "data_size": 63488 00:24:13.497 }, 00:24:13.497 { 00:24:13.497 "name": "BaseBdev2", 00:24:13.498 "uuid": "4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:13.498 "is_configured": true, 00:24:13.498 "data_offset": 2048, 00:24:13.498 "data_size": 63488 00:24:13.498 } 00:24:13.498 ] 00:24:13.498 }' 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:13.498 [2024-04-24 00:38:07.169733] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:13.498 [2024-04-24 00:38:07.170058] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:13.498 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@657 -- # local timeout=510 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.498 00:38:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.761 00:38:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:13.761 "name": "raid_bdev1", 00:24:13.761 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:13.761 
"strip_size_kb": 0, 00:24:13.761 "state": "online", 00:24:13.761 "raid_level": "raid1", 00:24:13.761 "superblock": true, 00:24:13.761 "num_base_bdevs": 2, 00:24:13.761 "num_base_bdevs_discovered": 2, 00:24:13.761 "num_base_bdevs_operational": 2, 00:24:13.761 "process": { 00:24:13.761 "type": "rebuild", 00:24:13.761 "target": "spare", 00:24:13.761 "progress": { 00:24:13.761 "blocks": 16384, 00:24:13.761 "percent": 25 00:24:13.761 } 00:24:13.761 }, 00:24:13.761 "base_bdevs_list": [ 00:24:13.761 { 00:24:13.761 "name": "spare", 00:24:13.761 "uuid": "b0c0e0f9-6476-5f35-a5fe-634d4b98de6e", 00:24:13.761 "is_configured": true, 00:24:13.761 "data_offset": 2048, 00:24:13.761 "data_size": 63488 00:24:13.761 }, 00:24:13.761 { 00:24:13.761 "name": "BaseBdev2", 00:24:13.761 "uuid": "4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:13.761 "is_configured": true, 00:24:13.761 "data_offset": 2048, 00:24:13.761 "data_size": 63488 00:24:13.761 } 00:24:13.761 ] 00:24:13.761 }' 00:24:13.761 00:38:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:13.761 00:38:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:13.761 00:38:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:13.761 00:38:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:13.761 00:38:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:14.333 [2024-04-24 00:38:07.921313] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:14.898 00:38:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:14.898 00:38:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:14.898 00:38:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:14.898 00:38:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:14.898 00:38:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:14.898 00:38:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:14.898 00:38:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.898 00:38:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.898 [2024-04-24 00:38:08.592333] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:24:15.157 [2024-04-24 00:38:08.810996] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:24:15.157 00:38:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:15.157 "name": "raid_bdev1", 00:24:15.157 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:15.157 "strip_size_kb": 0, 00:24:15.157 "state": "online", 00:24:15.157 "raid_level": "raid1", 00:24:15.157 "superblock": true, 00:24:15.157 "num_base_bdevs": 2, 00:24:15.157 "num_base_bdevs_discovered": 2, 00:24:15.157 "num_base_bdevs_operational": 2, 00:24:15.157 "process": { 00:24:15.157 "type": "rebuild", 00:24:15.157 "target": "spare", 00:24:15.157 "progress": { 00:24:15.157 "blocks": 38912, 00:24:15.157 "percent": 61 00:24:15.157 } 00:24:15.157 }, 00:24:15.157 "base_bdevs_list": [ 00:24:15.157 { 00:24:15.157 "name": "spare", 00:24:15.157 "uuid": "b0c0e0f9-6476-5f35-a5fe-634d4b98de6e", 00:24:15.157 "is_configured": true, 00:24:15.157 "data_offset": 2048, 00:24:15.157 "data_size": 63488 00:24:15.157 }, 00:24:15.157 { 00:24:15.157 "name": "BaseBdev2", 00:24:15.157 "uuid": 
"4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:15.157 "is_configured": true, 00:24:15.157 "data_offset": 2048, 00:24:15.157 "data_size": 63488 00:24:15.157 } 00:24:15.157 ] 00:24:15.157 }' 00:24:15.157 00:38:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:15.157 00:38:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.157 00:38:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:15.157 00:38:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.157 00:38:08 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:15.722 [2024-04-24 00:38:09.445089] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:24:15.979 [2024-04-24 00:38:09.553205] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:15.979 [2024-04-24 00:38:09.553549] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:24:16.236 00:38:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:16.236 00:38:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.236 00:38:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:16.236 00:38:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:16.236 00:38:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:16.236 00:38:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:16.236 00:38:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.236 00:38:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.236 [2024-04-24 00:38:09.997825] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:24:16.801 00:38:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:16.801 "name": "raid_bdev1", 00:24:16.801 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:16.801 "strip_size_kb": 0, 00:24:16.801 "state": "online", 00:24:16.801 "raid_level": "raid1", 00:24:16.801 "superblock": true, 00:24:16.801 "num_base_bdevs": 2, 00:24:16.801 "num_base_bdevs_discovered": 2, 00:24:16.801 "num_base_bdevs_operational": 2, 00:24:16.801 "process": { 00:24:16.801 "type": "rebuild", 00:24:16.801 "target": "spare", 00:24:16.801 "progress": { 00:24:16.801 "blocks": 61440, 00:24:16.801 "percent": 96 00:24:16.801 } 00:24:16.801 }, 00:24:16.801 "base_bdevs_list": [ 00:24:16.801 { 00:24:16.801 "name": "spare", 00:24:16.801 "uuid": "b0c0e0f9-6476-5f35-a5fe-634d4b98de6e", 00:24:16.801 "is_configured": true, 00:24:16.801 "data_offset": 2048, 00:24:16.801 "data_size": 63488 00:24:16.801 }, 00:24:16.801 { 00:24:16.801 "name": "BaseBdev2", 00:24:16.801 "uuid": "4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:16.801 "is_configured": true, 00:24:16.801 "data_offset": 2048, 00:24:16.801 "data_size": 63488 00:24:16.801 } 00:24:16.801 ] 00:24:16.801 }' 00:24:16.801 00:38:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:16.801 [2024-04-24 00:38:10.325770] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:16.801 00:38:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.801 00:38:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:16.801 00:38:10 -- bdev/bdev_raid.sh@191 -- # [[ spare 
== \s\p\a\r\e ]] 00:24:16.801 00:38:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:16.801 [2024-04-24 00:38:10.439748] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:16.801 [2024-04-24 00:38:10.442182] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:17.733 00:38:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:17.733 00:38:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:17.733 00:38:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:17.733 00:38:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:17.733 00:38:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:17.733 00:38:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:17.733 00:38:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.733 00:38:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.990 00:38:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:17.990 "name": "raid_bdev1", 00:24:17.990 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:17.990 "strip_size_kb": 0, 00:24:17.990 "state": "online", 00:24:17.990 "raid_level": "raid1", 00:24:17.990 "superblock": true, 00:24:17.990 "num_base_bdevs": 2, 00:24:17.990 "num_base_bdevs_discovered": 2, 00:24:17.990 "num_base_bdevs_operational": 2, 00:24:17.990 "base_bdevs_list": [ 00:24:17.990 { 00:24:17.990 "name": "spare", 00:24:17.990 "uuid": "b0c0e0f9-6476-5f35-a5fe-634d4b98de6e", 00:24:17.990 "is_configured": true, 00:24:17.990 "data_offset": 2048, 00:24:17.990 "data_size": 63488 00:24:17.990 }, 00:24:17.990 { 00:24:17.990 "name": "BaseBdev2", 00:24:17.990 "uuid": "4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:17.990 "is_configured": true, 00:24:17.990 "data_offset": 2048, 00:24:17.990 "data_size": 63488 00:24:17.990 } 00:24:17.990 ] 00:24:17.990 }' 00:24:17.990 00:38:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:17.990 00:38:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:17.990 00:38:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:18.247 00:38:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:18.247 00:38:11 -- bdev/bdev_raid.sh@660 -- # break 00:24:18.247 00:38:11 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:18.247 00:38:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:18.247 00:38:11 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:18.247 00:38:11 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:18.247 00:38:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:18.247 00:38:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.247 00:38:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:18.504 "name": "raid_bdev1", 00:24:18.504 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:18.504 "strip_size_kb": 0, 00:24:18.504 "state": "online", 00:24:18.504 "raid_level": "raid1", 00:24:18.504 "superblock": true, 00:24:18.504 "num_base_bdevs": 2, 00:24:18.504 "num_base_bdevs_discovered": 2, 00:24:18.504 "num_base_bdevs_operational": 2, 00:24:18.504 "base_bdevs_list": [ 00:24:18.504 { 00:24:18.504 "name": "spare", 00:24:18.504 "uuid": 
"b0c0e0f9-6476-5f35-a5fe-634d4b98de6e", 00:24:18.504 "is_configured": true, 00:24:18.504 "data_offset": 2048, 00:24:18.504 "data_size": 63488 00:24:18.504 }, 00:24:18.504 { 00:24:18.504 "name": "BaseBdev2", 00:24:18.504 "uuid": "4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:18.504 "is_configured": true, 00:24:18.504 "data_offset": 2048, 00:24:18.504 "data_size": 63488 00:24:18.504 } 00:24:18.504 ] 00:24:18.504 }' 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.504 00:38:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.067 00:38:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:19.067 "name": "raid_bdev1", 00:24:19.067 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:19.067 "strip_size_kb": 0, 00:24:19.067 "state": "online", 00:24:19.067 "raid_level": "raid1", 00:24:19.067 "superblock": true, 00:24:19.067 "num_base_bdevs": 2, 00:24:19.067 "num_base_bdevs_discovered": 2, 00:24:19.067 "num_base_bdevs_operational": 2, 00:24:19.067 "base_bdevs_list": [ 00:24:19.067 { 00:24:19.067 "name": "spare", 00:24:19.067 "uuid": "b0c0e0f9-6476-5f35-a5fe-634d4b98de6e", 00:24:19.067 "is_configured": true, 00:24:19.067 "data_offset": 2048, 00:24:19.067 "data_size": 63488 00:24:19.067 }, 00:24:19.067 { 00:24:19.067 "name": "BaseBdev2", 00:24:19.067 "uuid": "4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:19.067 "is_configured": true, 00:24:19.067 "data_offset": 2048, 00:24:19.067 "data_size": 63488 00:24:19.067 } 00:24:19.067 ] 00:24:19.067 }' 00:24:19.067 00:38:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:19.067 00:38:12 -- common/autotest_common.sh@10 -- # set +x 00:24:19.632 00:38:13 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:19.632 [2024-04-24 00:38:13.411947] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:19.632 [2024-04-24 00:38:13.411990] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:19.889 00:24:19.889 Latency(us) 00:24:19.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.889 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:24:19.890 raid_bdev1 : 12.39 93.83 281.49 0.00 0.00 14912.96 356.94 
111348.78 00:24:19.890 =================================================================================================================== 00:24:19.890 Total : 93.83 281.49 0.00 0.00 14912.96 356.94 111348.78 00:24:19.890 [2024-04-24 00:38:13.532327] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.890 [2024-04-24 00:38:13.532381] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:19.890 [2024-04-24 00:38:13.532471] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:19.890 [2024-04-24 00:38:13.532483] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:24:19.890 0 00:24:19.890 00:38:13 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.890 00:38:13 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:20.148 00:38:13 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:20.148 00:38:13 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:24:20.148 00:38:13 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:24:20.148 00:38:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:20.148 00:38:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:24:20.148 00:38:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:20.148 00:38:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:20.148 00:38:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:20.148 00:38:13 -- bdev/nbd_common.sh@12 -- # local i 00:24:20.148 00:38:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:20.148 00:38:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:20.148 00:38:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:24:20.406 /dev/nbd0 00:24:20.406 00:38:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:20.406 00:38:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:20.406 00:38:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:20.406 00:38:14 -- common/autotest_common.sh@855 -- # local i 00:24:20.406 00:38:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:20.406 00:38:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:20.406 00:38:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:20.406 00:38:14 -- common/autotest_common.sh@859 -- # break 00:24:20.406 00:38:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:20.406 00:38:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:20.406 00:38:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:20.406 1+0 records in 00:24:20.406 1+0 records out 00:24:20.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288912 s, 14.2 MB/s 00:24:20.406 00:38:14 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:20.407 00:38:14 -- common/autotest_common.sh@872 -- # size=4096 00:24:20.407 00:38:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:20.407 00:38:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:20.407 00:38:14 -- common/autotest_common.sh@875 -- # return 0 00:24:20.407 00:38:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:20.407 00:38:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
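The trace above repeats a single pattern until the rebuild disappears: fetch the raid bdev over the RPC socket, read .process.type and .process.target with jq, and sleep a second between polls. A minimal stand-alone sketch of that loop follows; the timeout value is an illustrative assumption, while the rpc.py call, socket path and jq filters are exactly the ones used in this trace.

# Sketch: poll raid_bdev1 until no rebuild process is reported (timeout is assumed).
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
timeout=120
while (( SECONDS < timeout )); do
    # One JSON object describes the raid bdev, including any running process.
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    ptype=$(jq -r '.process.type // "none"' <<< "$info")
    ptarget=$(jq -r '.process.target // "none"' <<< "$info")
    # "none"/"none" means the rebuild has finished and the process entry is gone.
    [[ $ptype == none && $ptarget == none ]] && break
    sleep 1
done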
00:24:20.407 00:38:14 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:24:20.407 00:38:14 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:24:20.407 00:38:14 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:24:20.407 00:38:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:20.407 00:38:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:24:20.407 00:38:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:20.407 00:38:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:20.407 00:38:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:20.407 00:38:14 -- bdev/nbd_common.sh@12 -- # local i 00:24:20.407 00:38:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:20.407 00:38:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:20.407 00:38:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:24:20.666 /dev/nbd1 00:24:20.666 00:38:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:20.666 00:38:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:20.666 00:38:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:24:20.666 00:38:14 -- common/autotest_common.sh@855 -- # local i 00:24:20.666 00:38:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:20.666 00:38:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:20.666 00:38:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:24:20.666 00:38:14 -- common/autotest_common.sh@859 -- # break 00:24:20.666 00:38:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:20.666 00:38:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:20.666 00:38:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:20.666 1+0 records in 00:24:20.666 1+0 records out 00:24:20.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294776 s, 13.9 MB/s 00:24:20.666 00:38:14 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:20.666 00:38:14 -- common/autotest_common.sh@872 -- # size=4096 00:24:20.666 00:38:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:20.666 00:38:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:20.666 00:38:14 -- common/autotest_common.sh@875 -- # return 0 00:24:20.666 00:38:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:20.666 00:38:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:20.666 00:38:14 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:20.925 00:38:14 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:20.925 00:38:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:20.925 00:38:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:20.925 00:38:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:20.925 00:38:14 -- bdev/nbd_common.sh@51 -- # local i 00:24:20.925 00:38:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:20.925 00:38:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:21.184 
00:38:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@41 -- # break 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@45 -- # return 0 00:24:21.184 00:38:14 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@51 -- # local i 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:21.184 00:38:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:21.442 00:38:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:21.442 00:38:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:21.442 00:38:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:21.442 00:38:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:21.442 00:38:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:21.442 00:38:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:21.442 00:38:15 -- bdev/nbd_common.sh@41 -- # break 00:24:21.442 00:38:15 -- bdev/nbd_common.sh@45 -- # return 0 00:24:21.442 00:38:15 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:21.442 00:38:15 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:21.442 00:38:15 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:21.442 00:38:15 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:21.442 00:38:15 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:22.009 [2024-04-24 00:38:15.499628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:22.009 [2024-04-24 00:38:15.499730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.009 [2024-04-24 00:38:15.499763] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:24:22.009 [2024-04-24 00:38:15.499790] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.009 [2024-04-24 00:38:15.502378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.009 [2024-04-24 00:38:15.502454] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:22.009 [2024-04-24 00:38:15.502581] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:22.009 [2024-04-24 00:38:15.502641] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:22.009 BaseBdev1 00:24:22.009 00:38:15 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:22.009 00:38:15 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:22.009 00:38:15 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:22.268 00:38:15 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p 
BaseBdev2 00:24:22.526 [2024-04-24 00:38:16.087797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:22.526 [2024-04-24 00:38:16.087888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.526 [2024-04-24 00:38:16.087940] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:22.526 [2024-04-24 00:38:16.087976] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.526 [2024-04-24 00:38:16.088467] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.526 [2024-04-24 00:38:16.088526] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:22.526 [2024-04-24 00:38:16.088650] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:22.526 [2024-04-24 00:38:16.088662] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:22.526 [2024-04-24 00:38:16.088670] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:22.526 [2024-04-24 00:38:16.088688] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:24:22.526 [2024-04-24 00:38:16.088749] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:22.526 BaseBdev2 00:24:22.526 00:38:16 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:22.785 00:38:16 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:23.043 [2024-04-24 00:38:16.663994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:23.043 [2024-04-24 00:38:16.664084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:23.043 [2024-04-24 00:38:16.664154] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:24:23.043 [2024-04-24 00:38:16.664188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:23.043 [2024-04-24 00:38:16.664739] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:23.043 [2024-04-24 00:38:16.664825] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:23.043 [2024-04-24 00:38:16.664976] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:23.043 [2024-04-24 00:38:16.665001] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:23.043 spare 00:24:23.043 00:38:16 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:23.043 00:38:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:23.043 00:38:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:23.043 00:38:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:23.043 00:38:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:23.043 00:38:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:23.043 00:38:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:23.043 00:38:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:23.043 00:38:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:24:23.043 00:38:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:23.043 00:38:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.043 00:38:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.043 [2024-04-24 00:38:16.765103] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:24:23.043 [2024-04-24 00:38:16.765145] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:23.043 [2024-04-24 00:38:16.765308] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:24:23.043 [2024-04-24 00:38:16.765740] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:24:23.043 [2024-04-24 00:38:16.765771] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:24:23.043 [2024-04-24 00:38:16.765932] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.300 00:38:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:23.300 "name": "raid_bdev1", 00:24:23.300 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:23.300 "strip_size_kb": 0, 00:24:23.300 "state": "online", 00:24:23.300 "raid_level": "raid1", 00:24:23.300 "superblock": true, 00:24:23.300 "num_base_bdevs": 2, 00:24:23.300 "num_base_bdevs_discovered": 2, 00:24:23.300 "num_base_bdevs_operational": 2, 00:24:23.300 "base_bdevs_list": [ 00:24:23.300 { 00:24:23.300 "name": "spare", 00:24:23.300 "uuid": "b0c0e0f9-6476-5f35-a5fe-634d4b98de6e", 00:24:23.300 "is_configured": true, 00:24:23.300 "data_offset": 2048, 00:24:23.300 "data_size": 63488 00:24:23.300 }, 00:24:23.300 { 00:24:23.300 "name": "BaseBdev2", 00:24:23.300 "uuid": "4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:23.300 "is_configured": true, 00:24:23.300 "data_offset": 2048, 00:24:23.300 "data_size": 63488 00:24:23.300 } 00:24:23.300 ] 00:24:23.300 }' 00:24:23.300 00:38:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:23.300 00:38:16 -- common/autotest_common.sh@10 -- # set +x 00:24:23.865 00:38:17 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:23.865 00:38:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:23.865 00:38:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:23.865 00:38:17 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:23.865 00:38:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:23.865 00:38:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.865 00:38:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:24.124 00:38:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:24.124 "name": "raid_bdev1", 00:24:24.124 "uuid": "eb5b026c-9ca8-4e72-a9b5-fd78ce092e5a", 00:24:24.124 "strip_size_kb": 0, 00:24:24.124 "state": "online", 00:24:24.124 "raid_level": "raid1", 00:24:24.124 "superblock": true, 00:24:24.124 "num_base_bdevs": 2, 00:24:24.124 "num_base_bdevs_discovered": 2, 00:24:24.124 "num_base_bdevs_operational": 2, 00:24:24.124 "base_bdevs_list": [ 00:24:24.124 { 00:24:24.124 "name": "spare", 00:24:24.124 "uuid": "b0c0e0f9-6476-5f35-a5fe-634d4b98de6e", 00:24:24.124 "is_configured": true, 00:24:24.124 "data_offset": 2048, 00:24:24.124 "data_size": 63488 00:24:24.124 }, 00:24:24.124 { 00:24:24.124 "name": "BaseBdev2", 00:24:24.124 "uuid": 
"4fe66b3c-fbba-50c4-899a-99beedbfd6bc", 00:24:24.124 "is_configured": true, 00:24:24.124 "data_offset": 2048, 00:24:24.124 "data_size": 63488 00:24:24.124 } 00:24:24.124 ] 00:24:24.124 }' 00:24:24.124 00:38:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:24.382 00:38:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:24.382 00:38:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:24.382 00:38:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:24.382 00:38:17 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.382 00:38:17 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:24.640 00:38:18 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:24.640 00:38:18 -- bdev/bdev_raid.sh@709 -- # killprocess 132924 00:24:24.640 00:38:18 -- common/autotest_common.sh@936 -- # '[' -z 132924 ']' 00:24:24.640 00:38:18 -- common/autotest_common.sh@940 -- # kill -0 132924 00:24:24.640 00:38:18 -- common/autotest_common.sh@941 -- # uname 00:24:24.640 00:38:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:24.640 00:38:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132924 00:24:24.640 00:38:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:24.640 00:38:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:24.640 killing process with pid 132924 00:24:24.640 00:38:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132924' 00:24:24.640 00:38:18 -- common/autotest_common.sh@955 -- # kill 132924 00:24:24.640 Received shutdown signal, test time was about 17.170191 seconds 00:24:24.640 00:24:24.640 Latency(us) 00:24:24.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.640 =================================================================================================================== 00:24:24.640 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:24.640 [2024-04-24 00:38:18.283268] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:24.640 [2024-04-24 00:38:18.283407] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:24.640 [2024-04-24 00:38:18.283517] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:24.640 [2024-04-24 00:38:18.283536] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:24:24.640 00:38:18 -- common/autotest_common.sh@960 -- # wait 132924 00:24:24.898 [2024-04-24 00:38:18.547302] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:26.797 00:24:26.797 real 0m23.650s 00:24:26.797 user 0m36.902s 00:24:26.797 sys 0m2.872s 00:24:26.797 ************************************ 00:24:26.797 END TEST raid_rebuild_test_sb_io 00:24:26.797 ************************************ 00:24:26.797 00:38:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:26.797 00:38:20 -- common/autotest_common.sh@10 -- # set +x 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:24:26.797 00:38:20 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:24:26.797 00:38:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:26.797 
00:38:20 -- common/autotest_common.sh@10 -- # set +x 00:24:26.797 ************************************ 00:24:26.797 START TEST raid_rebuild_test 00:24:26.797 ************************************ 00:24:26.797 00:38:20 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 false false 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@544 -- # raid_pid=133517 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:26.797 00:38:20 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133517 /var/tmp/spdk-raid.sock 00:24:26.797 00:38:20 -- common/autotest_common.sh@817 -- # '[' -z 133517 ']' 00:24:26.797 00:38:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:26.797 00:38:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:26.797 00:38:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:26.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:26.797 00:38:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:26.797 00:38:20 -- common/autotest_common.sh@10 -- # set +x 00:24:26.797 [2024-04-24 00:38:20.257475] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:24:26.797 [2024-04-24 00:38:20.257840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133517 ] 00:24:26.797 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:26.797 Zero copy mechanism will not be used. 00:24:26.797 [2024-04-24 00:38:20.456123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.055 [2024-04-24 00:38:20.698949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.341 [2024-04-24 00:38:20.927398] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:27.599 00:38:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:27.599 00:38:21 -- common/autotest_common.sh@850 -- # return 0 00:24:27.599 00:38:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:27.599 00:38:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:27.599 00:38:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:27.857 BaseBdev1 00:24:27.857 00:38:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:27.857 00:38:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:27.857 00:38:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:28.421 BaseBdev2 00:24:28.421 00:38:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:28.421 00:38:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:28.421 00:38:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:28.678 BaseBdev3 00:24:28.678 00:38:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:28.678 00:38:22 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:24:28.678 00:38:22 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:28.935 BaseBdev4 00:24:28.935 00:38:22 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:29.198 spare_malloc 00:24:29.198 00:38:22 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:29.508 spare_delay 00:24:29.508 00:38:23 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:29.789 [2024-04-24 00:38:23.387241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:29.789 [2024-04-24 00:38:23.387559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:29.789 [2024-04-24 00:38:23.387719] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:24:29.789 [2024-04-24 00:38:23.387866] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:29.789 [2024-04-24 00:38:23.390987] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:29.789 [2024-04-24 00:38:23.391192] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:24:29.789 spare 00:24:29.789 00:38:23 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:30.046 [2024-04-24 00:38:23.595627] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:30.046 [2024-04-24 00:38:23.598355] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:30.046 [2024-04-24 00:38:23.598586] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:30.046 [2024-04-24 00:38:23.598744] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:30.046 [2024-04-24 00:38:23.598942] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:24:30.046 [2024-04-24 00:38:23.599059] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:30.046 [2024-04-24 00:38:23.599293] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:30.046 [2024-04-24 00:38:23.599812] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:24:30.046 [2024-04-24 00:38:23.599940] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:24:30.046 [2024-04-24 00:38:23.600270] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:30.046 00:38:23 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:30.046 00:38:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:30.046 00:38:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:30.046 00:38:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:30.046 00:38:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:30.046 00:38:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:30.046 00:38:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:30.046 00:38:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:30.046 00:38:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:30.046 00:38:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:30.046 00:38:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.046 00:38:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.303 00:38:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:30.303 "name": "raid_bdev1", 00:24:30.303 "uuid": "fbe8f7a4-ae9c-4cc7-ba1a-4ce6eb823382", 00:24:30.303 "strip_size_kb": 0, 00:24:30.303 "state": "online", 00:24:30.303 "raid_level": "raid1", 00:24:30.303 "superblock": false, 00:24:30.303 "num_base_bdevs": 4, 00:24:30.303 "num_base_bdevs_discovered": 4, 00:24:30.303 "num_base_bdevs_operational": 4, 00:24:30.303 "base_bdevs_list": [ 00:24:30.303 { 00:24:30.303 "name": "BaseBdev1", 00:24:30.303 "uuid": "ba61df3e-a9d5-4244-94cb-93b75cdb61c2", 00:24:30.303 "is_configured": true, 00:24:30.303 "data_offset": 0, 00:24:30.303 "data_size": 65536 00:24:30.303 }, 00:24:30.303 { 00:24:30.303 "name": "BaseBdev2", 00:24:30.303 "uuid": "1c3094e7-5ac4-4f15-9876-26180c58c23a", 00:24:30.303 "is_configured": true, 00:24:30.303 "data_offset": 0, 00:24:30.303 "data_size": 65536 00:24:30.303 }, 00:24:30.303 { 00:24:30.303 "name": "BaseBdev3", 00:24:30.303 "uuid": 
"6e83add2-d02b-48c6-b43f-e1f9193655ea", 00:24:30.303 "is_configured": true, 00:24:30.303 "data_offset": 0, 00:24:30.303 "data_size": 65536 00:24:30.303 }, 00:24:30.303 { 00:24:30.303 "name": "BaseBdev4", 00:24:30.303 "uuid": "46b31666-9732-462f-ab73-94286ce72842", 00:24:30.303 "is_configured": true, 00:24:30.303 "data_offset": 0, 00:24:30.303 "data_size": 65536 00:24:30.303 } 00:24:30.303 ] 00:24:30.303 }' 00:24:30.303 00:38:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:30.303 00:38:23 -- common/autotest_common.sh@10 -- # set +x 00:24:30.867 00:38:24 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:30.867 00:38:24 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:31.124 [2024-04-24 00:38:24.720807] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:31.124 00:38:24 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:24:31.124 00:38:24 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.124 00:38:24 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:31.381 00:38:25 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:24:31.381 00:38:25 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:31.381 00:38:25 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:31.381 00:38:25 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:31.381 00:38:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:31.381 00:38:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:31.381 00:38:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:31.381 00:38:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:31.381 00:38:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:31.381 00:38:25 -- bdev/nbd_common.sh@12 -- # local i 00:24:31.381 00:38:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:31.381 00:38:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:31.381 00:38:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:31.638 [2024-04-24 00:38:25.352674] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:24:31.638 /dev/nbd0 00:24:31.638 00:38:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:31.638 00:38:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:31.638 00:38:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:31.638 00:38:25 -- common/autotest_common.sh@855 -- # local i 00:24:31.638 00:38:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:31.638 00:38:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:31.638 00:38:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:31.638 00:38:25 -- common/autotest_common.sh@859 -- # break 00:24:31.638 00:38:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:31.638 00:38:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:31.638 00:38:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:31.638 1+0 records in 00:24:31.638 1+0 records out 00:24:31.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456669 s, 9.0 MB/s 00:24:31.638 00:38:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:24:31.638 00:38:25 -- common/autotest_common.sh@872 -- # size=4096 00:24:31.638 00:38:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:31.638 00:38:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:31.638 00:38:25 -- common/autotest_common.sh@875 -- # return 0 00:24:31.638 00:38:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:31.638 00:38:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:31.638 00:38:25 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:24:31.638 00:38:25 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:24:31.638 00:38:25 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:24:38.298 65536+0 records in 00:24:38.298 65536+0 records out 00:24:38.298 33554432 bytes (34 MB, 32 MiB) copied, 5.53605 s, 6.1 MB/s 00:24:38.298 00:38:30 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:38.298 00:38:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:38.298 00:38:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:38.298 00:38:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:38.298 00:38:30 -- bdev/nbd_common.sh@51 -- # local i 00:24:38.298 00:38:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:38.298 00:38:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:38.298 [2024-04-24 00:38:31.258147] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:38.298 00:38:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:38.298 00:38:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:38.298 00:38:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:38.298 00:38:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:38.298 00:38:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:38.298 00:38:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:38.298 00:38:31 -- bdev/nbd_common.sh@41 -- # break 00:24:38.298 00:38:31 -- bdev/nbd_common.sh@45 -- # return 0 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:38.298 [2024-04-24 00:38:31.533950] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:38.298 "name": "raid_bdev1", 
00:24:38.298 "uuid": "fbe8f7a4-ae9c-4cc7-ba1a-4ce6eb823382", 00:24:38.298 "strip_size_kb": 0, 00:24:38.298 "state": "online", 00:24:38.298 "raid_level": "raid1", 00:24:38.298 "superblock": false, 00:24:38.298 "num_base_bdevs": 4, 00:24:38.298 "num_base_bdevs_discovered": 3, 00:24:38.298 "num_base_bdevs_operational": 3, 00:24:38.298 "base_bdevs_list": [ 00:24:38.298 { 00:24:38.298 "name": null, 00:24:38.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.298 "is_configured": false, 00:24:38.298 "data_offset": 0, 00:24:38.298 "data_size": 65536 00:24:38.298 }, 00:24:38.298 { 00:24:38.298 "name": "BaseBdev2", 00:24:38.298 "uuid": "1c3094e7-5ac4-4f15-9876-26180c58c23a", 00:24:38.298 "is_configured": true, 00:24:38.298 "data_offset": 0, 00:24:38.298 "data_size": 65536 00:24:38.298 }, 00:24:38.298 { 00:24:38.298 "name": "BaseBdev3", 00:24:38.298 "uuid": "6e83add2-d02b-48c6-b43f-e1f9193655ea", 00:24:38.298 "is_configured": true, 00:24:38.298 "data_offset": 0, 00:24:38.298 "data_size": 65536 00:24:38.298 }, 00:24:38.298 { 00:24:38.298 "name": "BaseBdev4", 00:24:38.298 "uuid": "46b31666-9732-462f-ab73-94286ce72842", 00:24:38.298 "is_configured": true, 00:24:38.298 "data_offset": 0, 00:24:38.298 "data_size": 65536 00:24:38.298 } 00:24:38.298 ] 00:24:38.298 }' 00:24:38.298 00:38:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:38.298 00:38:31 -- common/autotest_common.sh@10 -- # set +x 00:24:38.863 00:38:32 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:39.120 [2024-04-24 00:38:32.819325] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:39.120 [2024-04-24 00:38:32.819619] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:39.120 [2024-04-24 00:38:32.836648] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:24:39.120 [2024-04-24 00:38:32.839285] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:39.120 00:38:32 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:40.494 00:38:33 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:40.494 00:38:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:40.494 00:38:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:40.494 00:38:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:40.494 00:38:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:40.494 00:38:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.494 00:38:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.494 00:38:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:40.494 "name": "raid_bdev1", 00:24:40.494 "uuid": "fbe8f7a4-ae9c-4cc7-ba1a-4ce6eb823382", 00:24:40.494 "strip_size_kb": 0, 00:24:40.494 "state": "online", 00:24:40.494 "raid_level": "raid1", 00:24:40.494 "superblock": false, 00:24:40.494 "num_base_bdevs": 4, 00:24:40.494 "num_base_bdevs_discovered": 4, 00:24:40.494 "num_base_bdevs_operational": 4, 00:24:40.494 "process": { 00:24:40.494 "type": "rebuild", 00:24:40.494 "target": "spare", 00:24:40.494 "progress": { 00:24:40.494 "blocks": 24576, 00:24:40.494 "percent": 37 00:24:40.494 } 00:24:40.494 }, 00:24:40.494 "base_bdevs_list": [ 00:24:40.494 { 00:24:40.494 "name": "spare", 00:24:40.494 "uuid": 
"c8981e80-c81f-5ca3-9719-b410cf870011", 00:24:40.494 "is_configured": true, 00:24:40.494 "data_offset": 0, 00:24:40.494 "data_size": 65536 00:24:40.494 }, 00:24:40.494 { 00:24:40.494 "name": "BaseBdev2", 00:24:40.494 "uuid": "1c3094e7-5ac4-4f15-9876-26180c58c23a", 00:24:40.494 "is_configured": true, 00:24:40.495 "data_offset": 0, 00:24:40.495 "data_size": 65536 00:24:40.495 }, 00:24:40.495 { 00:24:40.495 "name": "BaseBdev3", 00:24:40.495 "uuid": "6e83add2-d02b-48c6-b43f-e1f9193655ea", 00:24:40.495 "is_configured": true, 00:24:40.495 "data_offset": 0, 00:24:40.495 "data_size": 65536 00:24:40.495 }, 00:24:40.495 { 00:24:40.495 "name": "BaseBdev4", 00:24:40.495 "uuid": "46b31666-9732-462f-ab73-94286ce72842", 00:24:40.495 "is_configured": true, 00:24:40.495 "data_offset": 0, 00:24:40.495 "data_size": 65536 00:24:40.495 } 00:24:40.495 ] 00:24:40.495 }' 00:24:40.495 00:38:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:40.495 00:38:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:40.495 00:38:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:40.495 00:38:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:40.495 00:38:34 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:40.753 [2024-04-24 00:38:34.365308] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:40.753 [2024-04-24 00:38:34.449859] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:40.753 [2024-04-24 00:38:34.450222] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.753 00:38:34 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:40.753 00:38:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:40.753 00:38:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:40.753 00:38:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:40.753 00:38:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:40.753 00:38:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:40.753 00:38:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:40.753 00:38:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:40.753 00:38:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:40.754 00:38:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:40.754 00:38:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.754 00:38:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.013 00:38:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:41.013 "name": "raid_bdev1", 00:24:41.013 "uuid": "fbe8f7a4-ae9c-4cc7-ba1a-4ce6eb823382", 00:24:41.013 "strip_size_kb": 0, 00:24:41.013 "state": "online", 00:24:41.013 "raid_level": "raid1", 00:24:41.013 "superblock": false, 00:24:41.013 "num_base_bdevs": 4, 00:24:41.013 "num_base_bdevs_discovered": 3, 00:24:41.013 "num_base_bdevs_operational": 3, 00:24:41.013 "base_bdevs_list": [ 00:24:41.013 { 00:24:41.013 "name": null, 00:24:41.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.013 "is_configured": false, 00:24:41.013 "data_offset": 0, 00:24:41.013 "data_size": 65536 00:24:41.013 }, 00:24:41.013 { 00:24:41.013 "name": "BaseBdev2", 00:24:41.013 "uuid": "1c3094e7-5ac4-4f15-9876-26180c58c23a", 
00:24:41.013 "is_configured": true, 00:24:41.013 "data_offset": 0, 00:24:41.013 "data_size": 65536 00:24:41.013 }, 00:24:41.013 { 00:24:41.013 "name": "BaseBdev3", 00:24:41.013 "uuid": "6e83add2-d02b-48c6-b43f-e1f9193655ea", 00:24:41.013 "is_configured": true, 00:24:41.013 "data_offset": 0, 00:24:41.013 "data_size": 65536 00:24:41.013 }, 00:24:41.013 { 00:24:41.013 "name": "BaseBdev4", 00:24:41.013 "uuid": "46b31666-9732-462f-ab73-94286ce72842", 00:24:41.013 "is_configured": true, 00:24:41.013 "data_offset": 0, 00:24:41.013 "data_size": 65536 00:24:41.013 } 00:24:41.013 ] 00:24:41.013 }' 00:24:41.013 00:38:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:41.013 00:38:34 -- common/autotest_common.sh@10 -- # set +x 00:24:41.580 00:38:35 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:41.580 00:38:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:41.580 00:38:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:41.580 00:38:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:41.580 00:38:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:41.580 00:38:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.580 00:38:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.840 00:38:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:41.840 "name": "raid_bdev1", 00:24:41.840 "uuid": "fbe8f7a4-ae9c-4cc7-ba1a-4ce6eb823382", 00:24:41.840 "strip_size_kb": 0, 00:24:41.840 "state": "online", 00:24:41.840 "raid_level": "raid1", 00:24:41.840 "superblock": false, 00:24:41.840 "num_base_bdevs": 4, 00:24:41.840 "num_base_bdevs_discovered": 3, 00:24:41.840 "num_base_bdevs_operational": 3, 00:24:41.840 "base_bdevs_list": [ 00:24:41.840 { 00:24:41.840 "name": null, 00:24:41.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.840 "is_configured": false, 00:24:41.840 "data_offset": 0, 00:24:41.840 "data_size": 65536 00:24:41.840 }, 00:24:41.840 { 00:24:41.840 "name": "BaseBdev2", 00:24:41.840 "uuid": "1c3094e7-5ac4-4f15-9876-26180c58c23a", 00:24:41.840 "is_configured": true, 00:24:41.840 "data_offset": 0, 00:24:41.840 "data_size": 65536 00:24:41.840 }, 00:24:41.840 { 00:24:41.840 "name": "BaseBdev3", 00:24:41.840 "uuid": "6e83add2-d02b-48c6-b43f-e1f9193655ea", 00:24:41.840 "is_configured": true, 00:24:41.840 "data_offset": 0, 00:24:41.840 "data_size": 65536 00:24:41.840 }, 00:24:41.840 { 00:24:41.840 "name": "BaseBdev4", 00:24:41.840 "uuid": "46b31666-9732-462f-ab73-94286ce72842", 00:24:41.840 "is_configured": true, 00:24:41.840 "data_offset": 0, 00:24:41.840 "data_size": 65536 00:24:41.840 } 00:24:41.840 ] 00:24:41.840 }' 00:24:41.840 00:38:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:42.099 00:38:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:42.099 00:38:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:42.099 00:38:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:42.099 00:38:35 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:42.099 [2024-04-24 00:38:35.888287] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:42.099 [2024-04-24 00:38:35.888549] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:42.357 [2024-04-24 00:38:35.902704] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09890 00:24:42.357 [2024-04-24 00:38:35.904924] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:42.357 00:38:35 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:43.291 00:38:36 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.291 00:38:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:43.291 00:38:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:43.291 00:38:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:43.291 00:38:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:43.291 00:38:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.291 00:38:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:43.550 00:38:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:43.550 "name": "raid_bdev1", 00:24:43.550 "uuid": "fbe8f7a4-ae9c-4cc7-ba1a-4ce6eb823382", 00:24:43.550 "strip_size_kb": 0, 00:24:43.550 "state": "online", 00:24:43.550 "raid_level": "raid1", 00:24:43.550 "superblock": false, 00:24:43.550 "num_base_bdevs": 4, 00:24:43.550 "num_base_bdevs_discovered": 4, 00:24:43.550 "num_base_bdevs_operational": 4, 00:24:43.550 "process": { 00:24:43.550 "type": "rebuild", 00:24:43.550 "target": "spare", 00:24:43.550 "progress": { 00:24:43.550 "blocks": 24576, 00:24:43.550 "percent": 37 00:24:43.550 } 00:24:43.550 }, 00:24:43.550 "base_bdevs_list": [ 00:24:43.550 { 00:24:43.550 "name": "spare", 00:24:43.550 "uuid": "c8981e80-c81f-5ca3-9719-b410cf870011", 00:24:43.550 "is_configured": true, 00:24:43.550 "data_offset": 0, 00:24:43.550 "data_size": 65536 00:24:43.550 }, 00:24:43.550 { 00:24:43.550 "name": "BaseBdev2", 00:24:43.550 "uuid": "1c3094e7-5ac4-4f15-9876-26180c58c23a", 00:24:43.550 "is_configured": true, 00:24:43.550 "data_offset": 0, 00:24:43.550 "data_size": 65536 00:24:43.550 }, 00:24:43.550 { 00:24:43.550 "name": "BaseBdev3", 00:24:43.550 "uuid": "6e83add2-d02b-48c6-b43f-e1f9193655ea", 00:24:43.550 "is_configured": true, 00:24:43.550 "data_offset": 0, 00:24:43.550 "data_size": 65536 00:24:43.550 }, 00:24:43.550 { 00:24:43.550 "name": "BaseBdev4", 00:24:43.550 "uuid": "46b31666-9732-462f-ab73-94286ce72842", 00:24:43.550 "is_configured": true, 00:24:43.550 "data_offset": 0, 00:24:43.550 "data_size": 65536 00:24:43.550 } 00:24:43.550 ] 00:24:43.550 }' 00:24:43.550 00:38:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:43.550 00:38:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.550 00:38:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:43.550 00:38:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.550 00:38:37 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:24:43.550 00:38:37 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:43.550 00:38:37 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:24:43.550 00:38:37 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:24:43.550 00:38:37 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:43.808 [2024-04-24 00:38:37.527565] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:44.066 [2024-04-24 00:38:37.615550] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09890 
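Each verify_raid_bdev_state call in this trace boils down to reading the same JSON object and comparing a few fields against expected values. A compact sketch of that check is below; the expected values mirror the verify_raid_bdev_state raid_bdev1 online raid1 0 3 invocations seen earlier, and treating a mismatch as a hard failure with exit 1 is an illustrative choice, not the harness's exact error handling.

# Sketch: assert raid_bdev1 is online, raid1, with 3 discovered base bdevs
# (expected values taken from the verification calls earlier in this trace).
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r '.state' <<< "$info") == online ]] || exit 1
[[ $(jq -r '.raid_level' <<< "$info") == raid1 ]] || exit 1
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 3 ]] || exit 1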
00:24:44.066 00:38:37 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:24:44.066 00:38:37 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:24:44.066 00:38:37 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.066 00:38:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:44.066 00:38:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:44.066 00:38:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:44.066 00:38:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:44.066 00:38:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.066 00:38:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.323 00:38:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:44.323 "name": "raid_bdev1", 00:24:44.323 "uuid": "fbe8f7a4-ae9c-4cc7-ba1a-4ce6eb823382", 00:24:44.323 "strip_size_kb": 0, 00:24:44.323 "state": "online", 00:24:44.323 "raid_level": "raid1", 00:24:44.323 "superblock": false, 00:24:44.323 "num_base_bdevs": 4, 00:24:44.323 "num_base_bdevs_discovered": 3, 00:24:44.323 "num_base_bdevs_operational": 3, 00:24:44.323 "process": { 00:24:44.323 "type": "rebuild", 00:24:44.323 "target": "spare", 00:24:44.323 "progress": { 00:24:44.323 "blocks": 40960, 00:24:44.323 "percent": 62 00:24:44.323 } 00:24:44.323 }, 00:24:44.323 "base_bdevs_list": [ 00:24:44.323 { 00:24:44.323 "name": "spare", 00:24:44.323 "uuid": "c8981e80-c81f-5ca3-9719-b410cf870011", 00:24:44.323 "is_configured": true, 00:24:44.323 "data_offset": 0, 00:24:44.323 "data_size": 65536 00:24:44.323 }, 00:24:44.323 { 00:24:44.323 "name": null, 00:24:44.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.323 "is_configured": false, 00:24:44.323 "data_offset": 0, 00:24:44.323 "data_size": 65536 00:24:44.323 }, 00:24:44.323 { 00:24:44.323 "name": "BaseBdev3", 00:24:44.323 "uuid": "6e83add2-d02b-48c6-b43f-e1f9193655ea", 00:24:44.323 "is_configured": true, 00:24:44.323 "data_offset": 0, 00:24:44.323 "data_size": 65536 00:24:44.323 }, 00:24:44.323 { 00:24:44.323 "name": "BaseBdev4", 00:24:44.323 "uuid": "46b31666-9732-462f-ab73-94286ce72842", 00:24:44.323 "is_configured": true, 00:24:44.323 "data_offset": 0, 00:24:44.323 "data_size": 65536 00:24:44.323 } 00:24:44.323 ] 00:24:44.323 }' 00:24:44.323 00:38:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:44.323 00:38:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:44.323 00:38:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:44.323 00:38:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:44.323 00:38:38 -- bdev/bdev_raid.sh@657 -- # local timeout=541 00:24:44.323 00:38:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:44.323 00:38:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.323 00:38:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:44.323 00:38:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:44.323 00:38:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:44.323 00:38:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:44.323 00:38:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.323 00:38:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.581 00:38:38 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:44.581 "name": "raid_bdev1", 00:24:44.581 "uuid": "fbe8f7a4-ae9c-4cc7-ba1a-4ce6eb823382", 00:24:44.581 "strip_size_kb": 0, 00:24:44.581 "state": "online", 00:24:44.581 "raid_level": "raid1", 00:24:44.581 "superblock": false, 00:24:44.581 "num_base_bdevs": 4, 00:24:44.581 "num_base_bdevs_discovered": 3, 00:24:44.581 "num_base_bdevs_operational": 3, 00:24:44.581 "process": { 00:24:44.581 "type": "rebuild", 00:24:44.581 "target": "spare", 00:24:44.581 "progress": { 00:24:44.581 "blocks": 47104, 00:24:44.581 "percent": 71 00:24:44.581 } 00:24:44.581 }, 00:24:44.581 "base_bdevs_list": [ 00:24:44.581 { 00:24:44.581 "name": "spare", 00:24:44.581 "uuid": "c8981e80-c81f-5ca3-9719-b410cf870011", 00:24:44.581 "is_configured": true, 00:24:44.581 "data_offset": 0, 00:24:44.581 "data_size": 65536 00:24:44.581 }, 00:24:44.581 { 00:24:44.581 "name": null, 00:24:44.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.581 "is_configured": false, 00:24:44.581 "data_offset": 0, 00:24:44.581 "data_size": 65536 00:24:44.581 }, 00:24:44.581 { 00:24:44.581 "name": "BaseBdev3", 00:24:44.581 "uuid": "6e83add2-d02b-48c6-b43f-e1f9193655ea", 00:24:44.581 "is_configured": true, 00:24:44.581 "data_offset": 0, 00:24:44.581 "data_size": 65536 00:24:44.581 }, 00:24:44.581 { 00:24:44.581 "name": "BaseBdev4", 00:24:44.581 "uuid": "46b31666-9732-462f-ab73-94286ce72842", 00:24:44.581 "is_configured": true, 00:24:44.581 "data_offset": 0, 00:24:44.581 "data_size": 65536 00:24:44.581 } 00:24:44.581 ] 00:24:44.581 }' 00:24:44.581 00:38:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:44.840 00:38:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:44.840 00:38:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:44.840 00:38:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:44.840 00:38:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:45.474 [2024-04-24 00:38:39.125070] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:45.474 [2024-04-24 00:38:39.125363] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:45.474 [2024-04-24 00:38:39.125540] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.733 00:38:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:45.733 00:38:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:45.733 00:38:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:45.733 00:38:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:45.733 00:38:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:45.733 00:38:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:45.733 00:38:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.733 00:38:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.991 00:38:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:45.991 "name": "raid_bdev1", 00:24:45.991 "uuid": "fbe8f7a4-ae9c-4cc7-ba1a-4ce6eb823382", 00:24:45.991 "strip_size_kb": 0, 00:24:45.991 "state": "online", 00:24:45.991 "raid_level": "raid1", 00:24:45.991 "superblock": false, 00:24:45.991 "num_base_bdevs": 4, 00:24:45.991 "num_base_bdevs_discovered": 3, 00:24:45.991 "num_base_bdevs_operational": 3, 00:24:45.991 "base_bdevs_list": [ 00:24:45.991 { 00:24:45.991 
"name": "spare", 00:24:45.991 "uuid": "c8981e80-c81f-5ca3-9719-b410cf870011", 00:24:45.991 "is_configured": true, 00:24:45.991 "data_offset": 0, 00:24:45.991 "data_size": 65536 00:24:45.991 }, 00:24:45.991 { 00:24:45.992 "name": null, 00:24:45.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.992 "is_configured": false, 00:24:45.992 "data_offset": 0, 00:24:45.992 "data_size": 65536 00:24:45.992 }, 00:24:45.992 { 00:24:45.992 "name": "BaseBdev3", 00:24:45.992 "uuid": "6e83add2-d02b-48c6-b43f-e1f9193655ea", 00:24:45.992 "is_configured": true, 00:24:45.992 "data_offset": 0, 00:24:45.992 "data_size": 65536 00:24:45.992 }, 00:24:45.992 { 00:24:45.992 "name": "BaseBdev4", 00:24:45.992 "uuid": "46b31666-9732-462f-ab73-94286ce72842", 00:24:45.992 "is_configured": true, 00:24:45.992 "data_offset": 0, 00:24:45.992 "data_size": 65536 00:24:45.992 } 00:24:45.992 ] 00:24:45.992 }' 00:24:45.992 00:38:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:45.992 00:38:39 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:45.992 00:38:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:45.992 00:38:39 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:45.992 00:38:39 -- bdev/bdev_raid.sh@660 -- # break 00:24:45.992 00:38:39 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:45.992 00:38:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:45.992 00:38:39 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:45.992 00:38:39 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:45.992 00:38:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:46.250 00:38:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.250 00:38:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.250 00:38:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:46.250 "name": "raid_bdev1", 00:24:46.250 "uuid": "fbe8f7a4-ae9c-4cc7-ba1a-4ce6eb823382", 00:24:46.250 "strip_size_kb": 0, 00:24:46.250 "state": "online", 00:24:46.250 "raid_level": "raid1", 00:24:46.250 "superblock": false, 00:24:46.250 "num_base_bdevs": 4, 00:24:46.250 "num_base_bdevs_discovered": 3, 00:24:46.250 "num_base_bdevs_operational": 3, 00:24:46.250 "base_bdevs_list": [ 00:24:46.250 { 00:24:46.250 "name": "spare", 00:24:46.250 "uuid": "c8981e80-c81f-5ca3-9719-b410cf870011", 00:24:46.250 "is_configured": true, 00:24:46.250 "data_offset": 0, 00:24:46.250 "data_size": 65536 00:24:46.250 }, 00:24:46.250 { 00:24:46.250 "name": null, 00:24:46.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.250 "is_configured": false, 00:24:46.250 "data_offset": 0, 00:24:46.250 "data_size": 65536 00:24:46.250 }, 00:24:46.250 { 00:24:46.250 "name": "BaseBdev3", 00:24:46.250 "uuid": "6e83add2-d02b-48c6-b43f-e1f9193655ea", 00:24:46.250 "is_configured": true, 00:24:46.250 "data_offset": 0, 00:24:46.250 "data_size": 65536 00:24:46.250 }, 00:24:46.250 { 00:24:46.250 "name": "BaseBdev4", 00:24:46.251 "uuid": "46b31666-9732-462f-ab73-94286ce72842", 00:24:46.251 "is_configured": true, 00:24:46.251 "data_offset": 0, 00:24:46.251 "data_size": 65536 00:24:46.251 } 00:24:46.251 ] 00:24:46.251 }' 00:24:46.251 00:38:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:46.251 00:38:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:46.251 00:38:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 
00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.509 00:38:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.767 00:38:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:46.767 "name": "raid_bdev1", 00:24:46.767 "uuid": "fbe8f7a4-ae9c-4cc7-ba1a-4ce6eb823382", 00:24:46.767 "strip_size_kb": 0, 00:24:46.767 "state": "online", 00:24:46.767 "raid_level": "raid1", 00:24:46.767 "superblock": false, 00:24:46.767 "num_base_bdevs": 4, 00:24:46.767 "num_base_bdevs_discovered": 3, 00:24:46.767 "num_base_bdevs_operational": 3, 00:24:46.767 "base_bdevs_list": [ 00:24:46.767 { 00:24:46.767 "name": "spare", 00:24:46.767 "uuid": "c8981e80-c81f-5ca3-9719-b410cf870011", 00:24:46.767 "is_configured": true, 00:24:46.767 "data_offset": 0, 00:24:46.767 "data_size": 65536 00:24:46.767 }, 00:24:46.767 { 00:24:46.767 "name": null, 00:24:46.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:46.767 "is_configured": false, 00:24:46.767 "data_offset": 0, 00:24:46.767 "data_size": 65536 00:24:46.767 }, 00:24:46.767 { 00:24:46.767 "name": "BaseBdev3", 00:24:46.767 "uuid": "6e83add2-d02b-48c6-b43f-e1f9193655ea", 00:24:46.767 "is_configured": true, 00:24:46.767 "data_offset": 0, 00:24:46.767 "data_size": 65536 00:24:46.767 }, 00:24:46.767 { 00:24:46.767 "name": "BaseBdev4", 00:24:46.767 "uuid": "46b31666-9732-462f-ab73-94286ce72842", 00:24:46.767 "is_configured": true, 00:24:46.767 "data_offset": 0, 00:24:46.767 "data_size": 65536 00:24:46.767 } 00:24:46.767 ] 00:24:46.767 }' 00:24:46.767 00:38:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:46.767 00:38:40 -- common/autotest_common.sh@10 -- # set +x 00:24:47.363 00:38:41 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:47.622 [2024-04-24 00:38:41.368396] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:47.622 [2024-04-24 00:38:41.368616] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:47.622 [2024-04-24 00:38:41.368826] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:47.622 [2024-04-24 00:38:41.368992] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:47.622 [2024-04-24 00:38:41.369107] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:24:47.622 00:38:41 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:47.622 00:38:41 -- 
bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.880 00:38:41 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:47.880 00:38:41 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:47.880 00:38:41 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:47.880 00:38:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:47.880 00:38:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:47.880 00:38:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:47.880 00:38:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:47.880 00:38:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:47.880 00:38:41 -- bdev/nbd_common.sh@12 -- # local i 00:24:47.880 00:38:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:47.880 00:38:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:47.880 00:38:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:48.137 /dev/nbd0 00:24:48.395 00:38:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:48.395 00:38:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:48.395 00:38:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:48.395 00:38:41 -- common/autotest_common.sh@855 -- # local i 00:24:48.395 00:38:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:48.395 00:38:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:48.395 00:38:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:48.395 00:38:41 -- common/autotest_common.sh@859 -- # break 00:24:48.395 00:38:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:48.395 00:38:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:48.395 00:38:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:48.395 1+0 records in 00:24:48.395 1+0 records out 00:24:48.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365444 s, 11.2 MB/s 00:24:48.395 00:38:41 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.395 00:38:41 -- common/autotest_common.sh@872 -- # size=4096 00:24:48.395 00:38:41 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.395 00:38:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:48.395 00:38:41 -- common/autotest_common.sh@875 -- # return 0 00:24:48.395 00:38:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:48.395 00:38:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:48.395 00:38:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:48.654 /dev/nbd1 00:24:48.654 00:38:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:48.654 00:38:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:48.654 00:38:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:24:48.654 00:38:42 -- common/autotest_common.sh@855 -- # local i 00:24:48.654 00:38:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:48.654 00:38:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:48.654 00:38:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:24:48.654 00:38:42 -- common/autotest_common.sh@859 -- # break 00:24:48.654 00:38:42 -- 
common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:48.654 00:38:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:48.654 00:38:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:48.654 1+0 records in 00:24:48.654 1+0 records out 00:24:48.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755152 s, 5.4 MB/s 00:24:48.654 00:38:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.654 00:38:42 -- common/autotest_common.sh@872 -- # size=4096 00:24:48.654 00:38:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.654 00:38:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:48.654 00:38:42 -- common/autotest_common.sh@875 -- # return 0 00:24:48.654 00:38:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:48.654 00:38:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:48.654 00:38:42 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:48.912 00:38:42 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:48.912 00:38:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:48.912 00:38:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:48.912 00:38:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:48.912 00:38:42 -- bdev/nbd_common.sh@51 -- # local i 00:24:48.912 00:38:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:48.912 00:38:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:49.170 00:38:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:49.170 00:38:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:49.170 00:38:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:49.170 00:38:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:49.170 00:38:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:49.170 00:38:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:49.170 00:38:42 -- bdev/nbd_common.sh@41 -- # break 00:24:49.170 00:38:42 -- bdev/nbd_common.sh@45 -- # return 0 00:24:49.170 00:38:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:49.170 00:38:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:49.170 00:38:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:49.427 00:38:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:49.427 00:38:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:49.427 00:38:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:49.427 00:38:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:49.428 00:38:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:49.428 00:38:42 -- bdev/nbd_common.sh@41 -- # break 00:24:49.428 00:38:42 -- bdev/nbd_common.sh@45 -- # return 0 00:24:49.428 00:38:42 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:49.428 00:38:42 -- bdev/bdev_raid.sh@709 -- # killprocess 133517 00:24:49.428 00:38:42 -- common/autotest_common.sh@936 -- # '[' -z 133517 ']' 00:24:49.428 00:38:42 -- common/autotest_common.sh@940 -- # kill -0 133517 00:24:49.428 00:38:42 -- common/autotest_common.sh@941 -- # uname 00:24:49.428 00:38:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:49.428 00:38:42 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 133517 00:24:49.428 killing process with pid 133517 00:24:49.428 Received shutdown signal, test time was about 60.000000 seconds 00:24:49.428 00:24:49.428 Latency(us) 00:24:49.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.428 =================================================================================================================== 00:24:49.428 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:49.428 00:38:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:49.428 00:38:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:49.428 00:38:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133517' 00:24:49.428 00:38:42 -- common/autotest_common.sh@955 -- # kill 133517 00:24:49.428 00:38:42 -- common/autotest_common.sh@960 -- # wait 133517 00:24:49.428 [2024-04-24 00:38:42.998716] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:49.993 [2024-04-24 00:38:43.530912] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:51.363 ************************************ 00:24:51.363 END TEST raid_rebuild_test 00:24:51.363 ************************************ 00:24:51.363 00:38:44 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:51.363 00:24:51.363 real 0m24.760s 00:24:51.363 user 0m33.842s 00:24:51.363 sys 0m4.396s 00:24:51.363 00:38:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:51.363 00:38:44 -- common/autotest_common.sh@10 -- # set +x 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:24:51.363 00:38:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:24:51.363 00:38:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:51.363 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:24:51.363 ************************************ 00:24:51.363 START TEST raid_rebuild_test_sb 00:24:51.363 ************************************ 00:24:51.363 00:38:45 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 true false 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@521 -- # local 
base_bdevs 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@544 -- # raid_pid=134084 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134084 /var/tmp/spdk-raid.sock 00:24:51.363 00:38:45 -- common/autotest_common.sh@817 -- # '[' -z 134084 ']' 00:24:51.363 00:38:45 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:51.363 00:38:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:51.363 00:38:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:51.363 00:38:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:51.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:51.363 00:38:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:51.363 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:24:51.363 [2024-04-24 00:38:45.149991] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:24:51.363 [2024-04-24 00:38:45.150489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134084 ] 00:24:51.363 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:51.363 Zero copy mechanism will not be used. 
00:24:51.621 [2024-04-24 00:38:45.331240] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.878 [2024-04-24 00:38:45.589335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.136 [2024-04-24 00:38:45.843148] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:52.393 00:38:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:52.394 00:38:46 -- common/autotest_common.sh@850 -- # return 0 00:24:52.394 00:38:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:52.394 00:38:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:52.394 00:38:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:52.651 BaseBdev1_malloc 00:24:52.651 00:38:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:52.909 [2024-04-24 00:38:46.633314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:52.909 [2024-04-24 00:38:46.633650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:52.909 [2024-04-24 00:38:46.633830] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:24:52.909 [2024-04-24 00:38:46.633994] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:52.909 [2024-04-24 00:38:46.637484] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:52.909 [2024-04-24 00:38:46.637713] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:52.909 BaseBdev1 00:24:52.909 00:38:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:52.909 00:38:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:52.909 00:38:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:53.167 BaseBdev2_malloc 00:24:53.167 00:38:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:53.425 [2024-04-24 00:38:47.204716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:53.425 [2024-04-24 00:38:47.205026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.425 [2024-04-24 00:38:47.205223] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:53.425 [2024-04-24 00:38:47.205453] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.425 [2024-04-24 00:38:47.208212] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.425 [2024-04-24 00:38:47.208395] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:53.425 BaseBdev2 00:24:53.683 00:38:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:53.683 00:38:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:53.683 00:38:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:53.940 BaseBdev3_malloc 00:24:53.940 00:38:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:24:54.198 [2024-04-24 00:38:47.735823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:54.198 [2024-04-24 00:38:47.736120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.198 [2024-04-24 00:38:47.736322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:54.198 [2024-04-24 00:38:47.736517] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.198 [2024-04-24 00:38:47.739300] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.198 [2024-04-24 00:38:47.739477] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:54.198 BaseBdev3 00:24:54.198 00:38:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:54.198 00:38:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:54.198 00:38:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:54.456 BaseBdev4_malloc 00:24:54.456 00:38:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:54.714 [2024-04-24 00:38:48.360245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:54.714 [2024-04-24 00:38:48.360550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.714 [2024-04-24 00:38:48.360755] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:24:54.714 [2024-04-24 00:38:48.360932] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.714 [2024-04-24 00:38:48.363646] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.714 [2024-04-24 00:38:48.363820] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:54.714 BaseBdev4 00:24:54.714 00:38:48 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:54.971 spare_malloc 00:24:54.971 00:38:48 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:55.229 spare_delay 00:24:55.229 00:38:48 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:55.512 [2024-04-24 00:38:49.230417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:55.512 [2024-04-24 00:38:49.230679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.512 [2024-04-24 00:38:49.230755] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:55.513 [2024-04-24 00:38:49.230873] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.513 [2024-04-24 00:38:49.233433] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.513 [2024-04-24 00:38:49.233612] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:55.513 spare 00:24:55.513 00:38:49 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:55.770 [2024-04-24 00:38:49.502623] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:55.770 [2024-04-24 00:38:49.505007] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:55.770 [2024-04-24 00:38:49.505216] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:55.770 [2024-04-24 00:38:49.505397] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:55.770 [2024-04-24 00:38:49.505644] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:24:55.770 [2024-04-24 00:38:49.505687] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:55.770 [2024-04-24 00:38:49.505907] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:55.770 [2024-04-24 00:38:49.506403] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:24:55.770 [2024-04-24 00:38:49.506522] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:24:55.770 [2024-04-24 00:38:49.506824] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:55.770 00:38:49 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:55.770 00:38:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:55.770 00:38:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:55.770 00:38:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:55.770 00:38:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:55.770 00:38:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:55.770 00:38:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:55.770 00:38:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:55.770 00:38:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:55.770 00:38:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:55.770 00:38:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.770 00:38:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.028 00:38:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:56.028 "name": "raid_bdev1", 00:24:56.028 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:24:56.028 "strip_size_kb": 0, 00:24:56.028 "state": "online", 00:24:56.028 "raid_level": "raid1", 00:24:56.028 "superblock": true, 00:24:56.028 "num_base_bdevs": 4, 00:24:56.028 "num_base_bdevs_discovered": 4, 00:24:56.028 "num_base_bdevs_operational": 4, 00:24:56.028 "base_bdevs_list": [ 00:24:56.028 { 00:24:56.028 "name": "BaseBdev1", 00:24:56.028 "uuid": "ee7aa5b6-1a2d-5b08-89af-697628ee3f95", 00:24:56.028 "is_configured": true, 00:24:56.028 "data_offset": 2048, 00:24:56.028 "data_size": 63488 00:24:56.028 }, 00:24:56.028 { 00:24:56.028 "name": "BaseBdev2", 00:24:56.028 "uuid": "5558dc71-97f2-5e44-aa5f-5981b28764fd", 00:24:56.028 "is_configured": true, 00:24:56.028 "data_offset": 2048, 00:24:56.028 "data_size": 63488 00:24:56.028 }, 00:24:56.028 { 00:24:56.028 "name": "BaseBdev3", 00:24:56.028 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:24:56.028 "is_configured": true, 00:24:56.028 "data_offset": 2048, 00:24:56.028 "data_size": 63488 00:24:56.028 }, 00:24:56.028 
{ 00:24:56.028 "name": "BaseBdev4", 00:24:56.028 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:24:56.028 "is_configured": true, 00:24:56.028 "data_offset": 2048, 00:24:56.028 "data_size": 63488 00:24:56.028 } 00:24:56.028 ] 00:24:56.028 }' 00:24:56.028 00:38:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:56.028 00:38:49 -- common/autotest_common.sh@10 -- # set +x 00:24:56.594 00:38:50 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:56.594 00:38:50 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:56.852 [2024-04-24 00:38:50.643403] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:57.110 00:38:50 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:24:57.110 00:38:50 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.110 00:38:50 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:57.369 00:38:50 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:57.369 00:38:50 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:57.369 00:38:50 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:57.369 00:38:50 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:57.369 00:38:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:57.369 00:38:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:57.369 00:38:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:57.369 00:38:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:57.369 00:38:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:57.369 00:38:50 -- bdev/nbd_common.sh@12 -- # local i 00:24:57.369 00:38:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:57.369 00:38:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:57.369 00:38:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:57.640 [2024-04-24 00:38:51.215263] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:57.640 /dev/nbd0 00:24:57.640 00:38:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:57.640 00:38:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:57.640 00:38:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:57.640 00:38:51 -- common/autotest_common.sh@855 -- # local i 00:24:57.640 00:38:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:57.640 00:38:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:57.640 00:38:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:57.640 00:38:51 -- common/autotest_common.sh@859 -- # break 00:24:57.640 00:38:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:57.640 00:38:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:57.640 00:38:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:57.640 1+0 records in 00:24:57.640 1+0 records out 00:24:57.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049435 s, 8.3 MB/s 00:24:57.640 00:38:51 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:57.640 00:38:51 -- common/autotest_common.sh@872 -- # size=4096 00:24:57.640 00:38:51 -- common/autotest_common.sh@873 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:57.640 00:38:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:57.640 00:38:51 -- common/autotest_common.sh@875 -- # return 0 00:24:57.640 00:38:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:57.640 00:38:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:57.640 00:38:51 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:24:57.640 00:38:51 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:24:57.640 00:38:51 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:25:04.217 63488+0 records in 00:25:04.217 63488+0 records out 00:25:04.217 32505856 bytes (33 MB, 31 MiB) copied, 6.31482 s, 5.1 MB/s 00:25:04.217 00:38:57 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@51 -- # local i 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:04.217 [2024-04-24 00:38:57.961529] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@41 -- # break 00:25:04.217 00:38:57 -- bdev/nbd_common.sh@45 -- # return 0 00:25:04.217 00:38:57 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:04.475 [2024-04-24 00:38:58.177322] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:04.475 00:38:58 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:04.475 00:38:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:04.475 00:38:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:04.475 00:38:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:04.475 00:38:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:04.475 00:38:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:04.475 00:38:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:04.475 00:38:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:04.475 00:38:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:04.475 00:38:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:04.475 00:38:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.475 00:38:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.733 00:38:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:04.733 "name": "raid_bdev1", 00:25:04.733 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:25:04.733 "strip_size_kb": 0, 00:25:04.733 "state": "online", 00:25:04.733 
"raid_level": "raid1", 00:25:04.733 "superblock": true, 00:25:04.733 "num_base_bdevs": 4, 00:25:04.733 "num_base_bdevs_discovered": 3, 00:25:04.733 "num_base_bdevs_operational": 3, 00:25:04.733 "base_bdevs_list": [ 00:25:04.733 { 00:25:04.733 "name": null, 00:25:04.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.733 "is_configured": false, 00:25:04.733 "data_offset": 2048, 00:25:04.733 "data_size": 63488 00:25:04.733 }, 00:25:04.733 { 00:25:04.733 "name": "BaseBdev2", 00:25:04.733 "uuid": "5558dc71-97f2-5e44-aa5f-5981b28764fd", 00:25:04.733 "is_configured": true, 00:25:04.733 "data_offset": 2048, 00:25:04.733 "data_size": 63488 00:25:04.733 }, 00:25:04.733 { 00:25:04.733 "name": "BaseBdev3", 00:25:04.733 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:25:04.733 "is_configured": true, 00:25:04.733 "data_offset": 2048, 00:25:04.733 "data_size": 63488 00:25:04.733 }, 00:25:04.733 { 00:25:04.733 "name": "BaseBdev4", 00:25:04.733 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:25:04.733 "is_configured": true, 00:25:04.733 "data_offset": 2048, 00:25:04.733 "data_size": 63488 00:25:04.733 } 00:25:04.733 ] 00:25:04.733 }' 00:25:04.733 00:38:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:04.733 00:38:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.666 00:38:59 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:05.666 [2024-04-24 00:38:59.377572] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:05.666 [2024-04-24 00:38:59.377802] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:05.666 [2024-04-24 00:38:59.394233] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:25:05.666 [2024-04-24 00:38:59.396718] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:05.666 00:38:59 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:07.038 00:39:00 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:07.038 00:39:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:07.038 00:39:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:07.038 00:39:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:07.038 00:39:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:07.038 00:39:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.038 00:39:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.038 00:39:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:07.038 "name": "raid_bdev1", 00:25:07.038 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:25:07.038 "strip_size_kb": 0, 00:25:07.038 "state": "online", 00:25:07.038 "raid_level": "raid1", 00:25:07.038 "superblock": true, 00:25:07.039 "num_base_bdevs": 4, 00:25:07.039 "num_base_bdevs_discovered": 4, 00:25:07.039 "num_base_bdevs_operational": 4, 00:25:07.039 "process": { 00:25:07.039 "type": "rebuild", 00:25:07.039 "target": "spare", 00:25:07.039 "progress": { 00:25:07.039 "blocks": 24576, 00:25:07.039 "percent": 38 00:25:07.039 } 00:25:07.039 }, 00:25:07.039 "base_bdevs_list": [ 00:25:07.039 { 00:25:07.039 "name": "spare", 00:25:07.039 "uuid": "c772fb58-9fa2-56d8-9f27-8649a11f7a6d", 00:25:07.039 "is_configured": true, 00:25:07.039 "data_offset": 2048, 00:25:07.039 "data_size": 63488 00:25:07.039 
}, 00:25:07.039 { 00:25:07.039 "name": "BaseBdev2", 00:25:07.039 "uuid": "5558dc71-97f2-5e44-aa5f-5981b28764fd", 00:25:07.039 "is_configured": true, 00:25:07.039 "data_offset": 2048, 00:25:07.039 "data_size": 63488 00:25:07.039 }, 00:25:07.039 { 00:25:07.039 "name": "BaseBdev3", 00:25:07.039 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:25:07.039 "is_configured": true, 00:25:07.039 "data_offset": 2048, 00:25:07.039 "data_size": 63488 00:25:07.039 }, 00:25:07.039 { 00:25:07.039 "name": "BaseBdev4", 00:25:07.039 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:25:07.039 "is_configured": true, 00:25:07.039 "data_offset": 2048, 00:25:07.039 "data_size": 63488 00:25:07.039 } 00:25:07.039 ] 00:25:07.039 }' 00:25:07.039 00:39:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:07.039 00:39:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:07.039 00:39:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:07.039 00:39:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:07.039 00:39:00 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:07.296 [2024-04-24 00:39:01.011503] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:07.561 [2024-04-24 00:39:01.107557] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:07.561 [2024-04-24 00:39:01.107874] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.561 00:39:01 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:07.561 00:39:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:07.561 00:39:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:07.561 00:39:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:07.561 00:39:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:07.561 00:39:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:07.561 00:39:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.561 00:39:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.561 00:39:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:07.561 00:39:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.561 00:39:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.561 00:39:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.825 00:39:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:07.825 "name": "raid_bdev1", 00:25:07.825 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:25:07.825 "strip_size_kb": 0, 00:25:07.825 "state": "online", 00:25:07.825 "raid_level": "raid1", 00:25:07.825 "superblock": true, 00:25:07.825 "num_base_bdevs": 4, 00:25:07.825 "num_base_bdevs_discovered": 3, 00:25:07.825 "num_base_bdevs_operational": 3, 00:25:07.825 "base_bdevs_list": [ 00:25:07.825 { 00:25:07.825 "name": null, 00:25:07.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.825 "is_configured": false, 00:25:07.825 "data_offset": 2048, 00:25:07.825 "data_size": 63488 00:25:07.825 }, 00:25:07.825 { 00:25:07.825 "name": "BaseBdev2", 00:25:07.825 "uuid": "5558dc71-97f2-5e44-aa5f-5981b28764fd", 00:25:07.825 "is_configured": true, 00:25:07.825 "data_offset": 2048, 00:25:07.825 "data_size": 63488 00:25:07.825 }, 00:25:07.825 { 00:25:07.825 
"name": "BaseBdev3", 00:25:07.825 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:25:07.825 "is_configured": true, 00:25:07.825 "data_offset": 2048, 00:25:07.825 "data_size": 63488 00:25:07.825 }, 00:25:07.825 { 00:25:07.825 "name": "BaseBdev4", 00:25:07.825 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:25:07.825 "is_configured": true, 00:25:07.825 "data_offset": 2048, 00:25:07.825 "data_size": 63488 00:25:07.825 } 00:25:07.825 ] 00:25:07.825 }' 00:25:07.825 00:39:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:07.825 00:39:01 -- common/autotest_common.sh@10 -- # set +x 00:25:08.390 00:39:01 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:08.390 00:39:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:08.390 00:39:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:08.390 00:39:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:08.390 00:39:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:08.390 00:39:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.390 00:39:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.390 00:39:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:08.390 "name": "raid_bdev1", 00:25:08.390 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:25:08.390 "strip_size_kb": 0, 00:25:08.390 "state": "online", 00:25:08.390 "raid_level": "raid1", 00:25:08.390 "superblock": true, 00:25:08.390 "num_base_bdevs": 4, 00:25:08.390 "num_base_bdevs_discovered": 3, 00:25:08.390 "num_base_bdevs_operational": 3, 00:25:08.390 "base_bdevs_list": [ 00:25:08.390 { 00:25:08.390 "name": null, 00:25:08.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.390 "is_configured": false, 00:25:08.390 "data_offset": 2048, 00:25:08.390 "data_size": 63488 00:25:08.390 }, 00:25:08.390 { 00:25:08.390 "name": "BaseBdev2", 00:25:08.390 "uuid": "5558dc71-97f2-5e44-aa5f-5981b28764fd", 00:25:08.390 "is_configured": true, 00:25:08.390 "data_offset": 2048, 00:25:08.390 "data_size": 63488 00:25:08.390 }, 00:25:08.390 { 00:25:08.390 "name": "BaseBdev3", 00:25:08.390 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:25:08.390 "is_configured": true, 00:25:08.390 "data_offset": 2048, 00:25:08.390 "data_size": 63488 00:25:08.390 }, 00:25:08.390 { 00:25:08.390 "name": "BaseBdev4", 00:25:08.390 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:25:08.390 "is_configured": true, 00:25:08.390 "data_offset": 2048, 00:25:08.390 "data_size": 63488 00:25:08.390 } 00:25:08.390 ] 00:25:08.390 }' 00:25:08.390 00:39:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:08.390 00:39:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:08.390 00:39:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:08.649 00:39:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:08.649 00:39:02 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:08.649 [2024-04-24 00:39:02.414344] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:08.649 [2024-04-24 00:39:02.414601] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:08.649 [2024-04-24 00:39:02.428776] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:25:08.649 [2024-04-24 00:39:02.430881] 
bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:08.907 00:39:02 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:09.840 00:39:03 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:09.840 00:39:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:09.840 00:39:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:09.840 00:39:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:09.840 00:39:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:09.840 00:39:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.840 00:39:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.098 00:39:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:10.098 "name": "raid_bdev1", 00:25:10.098 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:25:10.098 "strip_size_kb": 0, 00:25:10.098 "state": "online", 00:25:10.098 "raid_level": "raid1", 00:25:10.098 "superblock": true, 00:25:10.098 "num_base_bdevs": 4, 00:25:10.098 "num_base_bdevs_discovered": 4, 00:25:10.098 "num_base_bdevs_operational": 4, 00:25:10.098 "process": { 00:25:10.098 "type": "rebuild", 00:25:10.098 "target": "spare", 00:25:10.098 "progress": { 00:25:10.098 "blocks": 26624, 00:25:10.098 "percent": 41 00:25:10.098 } 00:25:10.098 }, 00:25:10.098 "base_bdevs_list": [ 00:25:10.098 { 00:25:10.098 "name": "spare", 00:25:10.098 "uuid": "c772fb58-9fa2-56d8-9f27-8649a11f7a6d", 00:25:10.098 "is_configured": true, 00:25:10.098 "data_offset": 2048, 00:25:10.098 "data_size": 63488 00:25:10.098 }, 00:25:10.098 { 00:25:10.098 "name": "BaseBdev2", 00:25:10.098 "uuid": "5558dc71-97f2-5e44-aa5f-5981b28764fd", 00:25:10.098 "is_configured": true, 00:25:10.098 "data_offset": 2048, 00:25:10.098 "data_size": 63488 00:25:10.098 }, 00:25:10.098 { 00:25:10.098 "name": "BaseBdev3", 00:25:10.098 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:25:10.098 "is_configured": true, 00:25:10.098 "data_offset": 2048, 00:25:10.098 "data_size": 63488 00:25:10.098 }, 00:25:10.098 { 00:25:10.098 "name": "BaseBdev4", 00:25:10.098 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:25:10.098 "is_configured": true, 00:25:10.098 "data_offset": 2048, 00:25:10.098 "data_size": 63488 00:25:10.098 } 00:25:10.098 ] 00:25:10.098 }' 00:25:10.098 00:39:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:10.098 00:39:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:10.098 00:39:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:10.098 00:39:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:10.098 00:39:03 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:10.098 00:39:03 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:10.098 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:10.099 00:39:03 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:10.099 00:39:03 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:25:10.099 00:39:03 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:25:10.099 00:39:03 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:10.356 [2024-04-24 00:39:04.065720] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:10.356 [2024-04-24 00:39:04.141707] 
bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3360 00:25:10.643 00:39:04 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:25:10.643 00:39:04 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:25:10.643 00:39:04 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:10.643 00:39:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:10.643 00:39:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:10.643 00:39:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:10.643 00:39:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:10.643 00:39:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.643 00:39:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.901 00:39:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:10.901 "name": "raid_bdev1", 00:25:10.901 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:25:10.901 "strip_size_kb": 0, 00:25:10.901 "state": "online", 00:25:10.901 "raid_level": "raid1", 00:25:10.901 "superblock": true, 00:25:10.901 "num_base_bdevs": 4, 00:25:10.901 "num_base_bdevs_discovered": 3, 00:25:10.901 "num_base_bdevs_operational": 3, 00:25:10.901 "process": { 00:25:10.901 "type": "rebuild", 00:25:10.901 "target": "spare", 00:25:10.901 "progress": { 00:25:10.901 "blocks": 40960, 00:25:10.901 "percent": 64 00:25:10.901 } 00:25:10.901 }, 00:25:10.901 "base_bdevs_list": [ 00:25:10.901 { 00:25:10.901 "name": "spare", 00:25:10.901 "uuid": "c772fb58-9fa2-56d8-9f27-8649a11f7a6d", 00:25:10.901 "is_configured": true, 00:25:10.901 "data_offset": 2048, 00:25:10.901 "data_size": 63488 00:25:10.901 }, 00:25:10.901 { 00:25:10.901 "name": null, 00:25:10.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:10.901 "is_configured": false, 00:25:10.901 "data_offset": 2048, 00:25:10.901 "data_size": 63488 00:25:10.901 }, 00:25:10.901 { 00:25:10.901 "name": "BaseBdev3", 00:25:10.901 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:25:10.901 "is_configured": true, 00:25:10.901 "data_offset": 2048, 00:25:10.901 "data_size": 63488 00:25:10.901 }, 00:25:10.901 { 00:25:10.901 "name": "BaseBdev4", 00:25:10.901 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:25:10.901 "is_configured": true, 00:25:10.901 "data_offset": 2048, 00:25:10.901 "data_size": 63488 00:25:10.901 } 00:25:10.901 ] 00:25:10.902 }' 00:25:10.902 00:39:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:10.902 00:39:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:10.902 00:39:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:10.902 00:39:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:10.902 00:39:04 -- bdev/bdev_raid.sh@657 -- # local timeout=567 00:25:10.902 00:39:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:10.902 00:39:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:10.902 00:39:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:10.902 00:39:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:10.902 00:39:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:10.902 00:39:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:10.902 00:39:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.902 00:39:04 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.160 00:39:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:11.160 "name": "raid_bdev1", 00:25:11.160 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:25:11.160 "strip_size_kb": 0, 00:25:11.160 "state": "online", 00:25:11.160 "raid_level": "raid1", 00:25:11.160 "superblock": true, 00:25:11.160 "num_base_bdevs": 4, 00:25:11.160 "num_base_bdevs_discovered": 3, 00:25:11.160 "num_base_bdevs_operational": 3, 00:25:11.160 "process": { 00:25:11.160 "type": "rebuild", 00:25:11.160 "target": "spare", 00:25:11.160 "progress": { 00:25:11.160 "blocks": 49152, 00:25:11.160 "percent": 77 00:25:11.160 } 00:25:11.160 }, 00:25:11.160 "base_bdevs_list": [ 00:25:11.160 { 00:25:11.160 "name": "spare", 00:25:11.160 "uuid": "c772fb58-9fa2-56d8-9f27-8649a11f7a6d", 00:25:11.160 "is_configured": true, 00:25:11.160 "data_offset": 2048, 00:25:11.160 "data_size": 63488 00:25:11.160 }, 00:25:11.160 { 00:25:11.160 "name": null, 00:25:11.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.160 "is_configured": false, 00:25:11.160 "data_offset": 2048, 00:25:11.160 "data_size": 63488 00:25:11.160 }, 00:25:11.160 { 00:25:11.160 "name": "BaseBdev3", 00:25:11.160 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:25:11.160 "is_configured": true, 00:25:11.160 "data_offset": 2048, 00:25:11.160 "data_size": 63488 00:25:11.160 }, 00:25:11.160 { 00:25:11.160 "name": "BaseBdev4", 00:25:11.160 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:25:11.160 "is_configured": true, 00:25:11.160 "data_offset": 2048, 00:25:11.160 "data_size": 63488 00:25:11.160 } 00:25:11.160 ] 00:25:11.160 }' 00:25:11.160 00:39:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:11.418 00:39:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:11.418 00:39:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:11.418 00:39:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:11.418 00:39:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:11.985 [2024-04-24 00:39:05.550580] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:11.985 [2024-04-24 00:39:05.550873] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:11.985 [2024-04-24 00:39:05.551188] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.552 00:39:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:12.552 00:39:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:12.552 00:39:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:12.552 00:39:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:12.552 00:39:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:12.552 00:39:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:12.552 00:39:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.552 00:39:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.552 00:39:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:12.553 "name": "raid_bdev1", 00:25:12.553 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:25:12.553 "strip_size_kb": 0, 00:25:12.553 "state": "online", 00:25:12.553 "raid_level": "raid1", 00:25:12.553 "superblock": true, 00:25:12.553 "num_base_bdevs": 4, 00:25:12.553 "num_base_bdevs_discovered": 3, 
00:25:12.553 "num_base_bdevs_operational": 3, 00:25:12.553 "base_bdevs_list": [ 00:25:12.553 { 00:25:12.553 "name": "spare", 00:25:12.553 "uuid": "c772fb58-9fa2-56d8-9f27-8649a11f7a6d", 00:25:12.553 "is_configured": true, 00:25:12.553 "data_offset": 2048, 00:25:12.553 "data_size": 63488 00:25:12.553 }, 00:25:12.553 { 00:25:12.553 "name": null, 00:25:12.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:12.553 "is_configured": false, 00:25:12.553 "data_offset": 2048, 00:25:12.553 "data_size": 63488 00:25:12.553 }, 00:25:12.553 { 00:25:12.553 "name": "BaseBdev3", 00:25:12.553 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:25:12.553 "is_configured": true, 00:25:12.553 "data_offset": 2048, 00:25:12.553 "data_size": 63488 00:25:12.553 }, 00:25:12.553 { 00:25:12.553 "name": "BaseBdev4", 00:25:12.553 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:25:12.553 "is_configured": true, 00:25:12.553 "data_offset": 2048, 00:25:12.553 "data_size": 63488 00:25:12.553 } 00:25:12.553 ] 00:25:12.553 }' 00:25:12.553 00:39:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:12.811 00:39:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:12.811 00:39:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:12.811 00:39:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:12.811 00:39:06 -- bdev/bdev_raid.sh@660 -- # break 00:25:12.811 00:39:06 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:12.811 00:39:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:12.811 00:39:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:12.811 00:39:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:12.811 00:39:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:12.811 00:39:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.811 00:39:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:13.069 "name": "raid_bdev1", 00:25:13.069 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:25:13.069 "strip_size_kb": 0, 00:25:13.069 "state": "online", 00:25:13.069 "raid_level": "raid1", 00:25:13.069 "superblock": true, 00:25:13.069 "num_base_bdevs": 4, 00:25:13.069 "num_base_bdevs_discovered": 3, 00:25:13.069 "num_base_bdevs_operational": 3, 00:25:13.069 "base_bdevs_list": [ 00:25:13.069 { 00:25:13.069 "name": "spare", 00:25:13.069 "uuid": "c772fb58-9fa2-56d8-9f27-8649a11f7a6d", 00:25:13.069 "is_configured": true, 00:25:13.069 "data_offset": 2048, 00:25:13.069 "data_size": 63488 00:25:13.069 }, 00:25:13.069 { 00:25:13.069 "name": null, 00:25:13.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.069 "is_configured": false, 00:25:13.069 "data_offset": 2048, 00:25:13.069 "data_size": 63488 00:25:13.069 }, 00:25:13.069 { 00:25:13.069 "name": "BaseBdev3", 00:25:13.069 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:25:13.069 "is_configured": true, 00:25:13.069 "data_offset": 2048, 00:25:13.069 "data_size": 63488 00:25:13.069 }, 00:25:13.069 { 00:25:13.069 "name": "BaseBdev4", 00:25:13.069 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:25:13.069 "is_configured": true, 00:25:13.069 "data_offset": 2048, 00:25:13.069 "data_size": 63488 00:25:13.069 } 00:25:13.069 ] 00:25:13.069 }' 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:13.069 00:39:06 -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.069 00:39:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.328 00:39:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:13.328 "name": "raid_bdev1", 00:25:13.328 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:25:13.328 "strip_size_kb": 0, 00:25:13.328 "state": "online", 00:25:13.328 "raid_level": "raid1", 00:25:13.328 "superblock": true, 00:25:13.328 "num_base_bdevs": 4, 00:25:13.328 "num_base_bdevs_discovered": 3, 00:25:13.328 "num_base_bdevs_operational": 3, 00:25:13.328 "base_bdevs_list": [ 00:25:13.328 { 00:25:13.328 "name": "spare", 00:25:13.328 "uuid": "c772fb58-9fa2-56d8-9f27-8649a11f7a6d", 00:25:13.328 "is_configured": true, 00:25:13.328 "data_offset": 2048, 00:25:13.328 "data_size": 63488 00:25:13.328 }, 00:25:13.328 { 00:25:13.328 "name": null, 00:25:13.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.328 "is_configured": false, 00:25:13.328 "data_offset": 2048, 00:25:13.328 "data_size": 63488 00:25:13.328 }, 00:25:13.328 { 00:25:13.328 "name": "BaseBdev3", 00:25:13.328 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:25:13.328 "is_configured": true, 00:25:13.328 "data_offset": 2048, 00:25:13.328 "data_size": 63488 00:25:13.328 }, 00:25:13.328 { 00:25:13.328 "name": "BaseBdev4", 00:25:13.328 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:25:13.328 "is_configured": true, 00:25:13.328 "data_offset": 2048, 00:25:13.328 "data_size": 63488 00:25:13.328 } 00:25:13.328 ] 00:25:13.328 }' 00:25:13.328 00:39:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:13.328 00:39:07 -- common/autotest_common.sh@10 -- # set +x 00:25:14.261 00:39:07 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:14.261 [2024-04-24 00:39:08.024279] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:14.261 [2024-04-24 00:39:08.024482] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:14.261 [2024-04-24 00:39:08.024681] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:14.261 [2024-04-24 00:39:08.024884] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:14.261 [2024-04-24 00:39:08.024981] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000010e00 name raid_bdev1, state offline 00:25:14.261 00:39:08 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:14.261 00:39:08 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.827 00:39:08 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:14.827 00:39:08 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:14.827 00:39:08 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:14.827 00:39:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:14.827 00:39:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:14.827 00:39:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:14.827 00:39:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:14.827 00:39:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:14.827 00:39:08 -- bdev/nbd_common.sh@12 -- # local i 00:25:14.827 00:39:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:14.827 00:39:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:14.827 00:39:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:14.827 /dev/nbd0 00:25:15.085 00:39:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:15.085 00:39:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:15.085 00:39:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:15.085 00:39:08 -- common/autotest_common.sh@855 -- # local i 00:25:15.085 00:39:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:15.085 00:39:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:15.085 00:39:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:15.085 00:39:08 -- common/autotest_common.sh@859 -- # break 00:25:15.085 00:39:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:15.085 00:39:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:15.085 00:39:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:15.085 1+0 records in 00:25:15.085 1+0 records out 00:25:15.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399788 s, 10.2 MB/s 00:25:15.085 00:39:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:15.085 00:39:08 -- common/autotest_common.sh@872 -- # size=4096 00:25:15.085 00:39:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:15.086 00:39:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:15.086 00:39:08 -- common/autotest_common.sh@875 -- # return 0 00:25:15.086 00:39:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:15.086 00:39:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:15.086 00:39:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:15.347 /dev/nbd1 00:25:15.347 00:39:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:15.347 00:39:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:15.347 00:39:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:25:15.347 00:39:08 -- common/autotest_common.sh@855 -- # local i 00:25:15.347 00:39:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:15.347 00:39:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:15.347 00:39:08 -- 
common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:25:15.347 00:39:08 -- common/autotest_common.sh@859 -- # break 00:25:15.347 00:39:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:15.347 00:39:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:15.347 00:39:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:15.347 1+0 records in 00:25:15.347 1+0 records out 00:25:15.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509873 s, 8.0 MB/s 00:25:15.347 00:39:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:15.347 00:39:08 -- common/autotest_common.sh@872 -- # size=4096 00:25:15.347 00:39:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:15.347 00:39:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:15.347 00:39:09 -- common/autotest_common.sh@875 -- # return 0 00:25:15.347 00:39:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:15.347 00:39:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:15.347 00:39:09 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:15.610 00:39:09 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:15.610 00:39:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:15.610 00:39:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:15.610 00:39:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:15.610 00:39:09 -- bdev/nbd_common.sh@51 -- # local i 00:25:15.610 00:39:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:15.610 00:39:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:15.869 00:39:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:15.869 00:39:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:15.869 00:39:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:15.869 00:39:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:15.869 00:39:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:15.869 00:39:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:15.869 00:39:09 -- bdev/nbd_common.sh@41 -- # break 00:25:15.869 00:39:09 -- bdev/nbd_common.sh@45 -- # return 0 00:25:15.869 00:39:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:15.869 00:39:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:16.127 00:39:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:16.127 00:39:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:16.127 00:39:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:16.127 00:39:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:16.127 00:39:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:16.127 00:39:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:16.127 00:39:09 -- bdev/nbd_common.sh@41 -- # break 00:25:16.127 00:39:09 -- bdev/nbd_common.sh@45 -- # return 0 00:25:16.127 00:39:09 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:16.127 00:39:09 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:16.127 00:39:09 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:16.127 00:39:09 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev1 00:25:16.385 00:39:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:16.643 [2024-04-24 00:39:10.305409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:16.643 [2024-04-24 00:39:10.305655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.643 [2024-04-24 00:39:10.305823] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:16.643 [2024-04-24 00:39:10.305921] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.643 [2024-04-24 00:39:10.308727] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.643 [2024-04-24 00:39:10.308905] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:16.643 [2024-04-24 00:39:10.309156] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:16.643 [2024-04-24 00:39:10.309310] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:16.643 BaseBdev1 00:25:16.643 00:39:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:16.643 00:39:10 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:25:16.643 00:39:10 -- bdev/bdev_raid.sh@696 -- # continue 00:25:16.643 00:39:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:16.643 00:39:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:25:16.643 00:39:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:25:16.902 00:39:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:17.162 [2024-04-24 00:39:10.905711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:17.162 [2024-04-24 00:39:10.905990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.162 [2024-04-24 00:39:10.906144] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:17.162 [2024-04-24 00:39:10.906248] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.162 [2024-04-24 00:39:10.906831] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.162 [2024-04-24 00:39:10.907026] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:17.162 [2024-04-24 00:39:10.907255] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:25:17.162 [2024-04-24 00:39:10.907341] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:25:17.162 [2024-04-24 00:39:10.907449] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:17.162 [2024-04-24 00:39:10.907551] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:25:17.162 [2024-04-24 00:39:10.907710] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:17.162 BaseBdev3 00:25:17.162 00:39:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:17.162 00:39:10 -- bdev/bdev_raid.sh@695 -- # '[' -z 
BaseBdev4 ']' 00:25:17.162 00:39:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:25:17.421 00:39:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:17.680 [2024-04-24 00:39:11.389843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:17.680 [2024-04-24 00:39:11.390136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.680 [2024-04-24 00:39:11.390271] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:25:17.680 [2024-04-24 00:39:11.390391] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.680 [2024-04-24 00:39:11.390902] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.680 [2024-04-24 00:39:11.391085] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:17.680 [2024-04-24 00:39:11.391283] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:25:17.680 [2024-04-24 00:39:11.391384] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:17.680 BaseBdev4 00:25:17.680 00:39:11 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:17.989 00:39:11 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:18.255 [2024-04-24 00:39:11.885937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:18.255 [2024-04-24 00:39:11.886169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.255 [2024-04-24 00:39:11.886292] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:25:18.255 [2024-04-24 00:39:11.886403] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.255 [2024-04-24 00:39:11.886965] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.255 [2024-04-24 00:39:11.887131] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:18.255 [2024-04-24 00:39:11.887357] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:18.255 [2024-04-24 00:39:11.887510] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:18.255 spare 00:25:18.255 00:39:11 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:18.255 00:39:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:18.255 00:39:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:18.255 00:39:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:18.255 00:39:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:18.255 00:39:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:18.255 00:39:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:18.255 00:39:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:18.255 00:39:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:18.255 00:39:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:18.255 00:39:11 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.255 00:39:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.255 [2024-04-24 00:39:11.987723] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:25:18.255 [2024-04-24 00:39:11.987914] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:18.255 [2024-04-24 00:39:11.988131] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:25:18.255 [2024-04-24 00:39:11.988784] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:25:18.255 [2024-04-24 00:39:11.988897] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:25:18.255 [2024-04-24 00:39:11.989145] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:18.514 00:39:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:18.514 "name": "raid_bdev1", 00:25:18.514 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:25:18.514 "strip_size_kb": 0, 00:25:18.514 "state": "online", 00:25:18.514 "raid_level": "raid1", 00:25:18.514 "superblock": true, 00:25:18.514 "num_base_bdevs": 4, 00:25:18.514 "num_base_bdevs_discovered": 3, 00:25:18.514 "num_base_bdevs_operational": 3, 00:25:18.514 "base_bdevs_list": [ 00:25:18.514 { 00:25:18.514 "name": "spare", 00:25:18.514 "uuid": "c772fb58-9fa2-56d8-9f27-8649a11f7a6d", 00:25:18.514 "is_configured": true, 00:25:18.514 "data_offset": 2048, 00:25:18.514 "data_size": 63488 00:25:18.514 }, 00:25:18.514 { 00:25:18.514 "name": null, 00:25:18.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.514 "is_configured": false, 00:25:18.514 "data_offset": 2048, 00:25:18.514 "data_size": 63488 00:25:18.514 }, 00:25:18.514 { 00:25:18.514 "name": "BaseBdev3", 00:25:18.514 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:25:18.514 "is_configured": true, 00:25:18.514 "data_offset": 2048, 00:25:18.514 "data_size": 63488 00:25:18.514 }, 00:25:18.514 { 00:25:18.514 "name": "BaseBdev4", 00:25:18.514 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:25:18.514 "is_configured": true, 00:25:18.514 "data_offset": 2048, 00:25:18.514 "data_size": 63488 00:25:18.514 } 00:25:18.514 ] 00:25:18.514 }' 00:25:18.514 00:39:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:18.514 00:39:12 -- common/autotest_common.sh@10 -- # set +x 00:25:19.080 00:39:12 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:19.080 00:39:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:19.080 00:39:12 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:19.080 00:39:12 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:19.080 00:39:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:19.080 00:39:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.080 00:39:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.338 00:39:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:19.338 "name": "raid_bdev1", 00:25:19.338 "uuid": "8a3dd883-630d-4360-abe3-a1648a9ca9fe", 00:25:19.338 "strip_size_kb": 0, 00:25:19.338 "state": "online", 00:25:19.338 "raid_level": "raid1", 00:25:19.338 "superblock": true, 00:25:19.338 "num_base_bdevs": 4, 00:25:19.338 "num_base_bdevs_discovered": 3, 00:25:19.338 
"num_base_bdevs_operational": 3, 00:25:19.338 "base_bdevs_list": [ 00:25:19.338 { 00:25:19.338 "name": "spare", 00:25:19.338 "uuid": "c772fb58-9fa2-56d8-9f27-8649a11f7a6d", 00:25:19.338 "is_configured": true, 00:25:19.338 "data_offset": 2048, 00:25:19.338 "data_size": 63488 00:25:19.338 }, 00:25:19.338 { 00:25:19.338 "name": null, 00:25:19.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.338 "is_configured": false, 00:25:19.338 "data_offset": 2048, 00:25:19.338 "data_size": 63488 00:25:19.338 }, 00:25:19.338 { 00:25:19.338 "name": "BaseBdev3", 00:25:19.338 "uuid": "295254ab-4b9c-5d5a-8c93-bc81190c8300", 00:25:19.338 "is_configured": true, 00:25:19.338 "data_offset": 2048, 00:25:19.338 "data_size": 63488 00:25:19.338 }, 00:25:19.338 { 00:25:19.338 "name": "BaseBdev4", 00:25:19.338 "uuid": "ece2f03e-f2bc-5d2a-9140-01237e168514", 00:25:19.338 "is_configured": true, 00:25:19.338 "data_offset": 2048, 00:25:19.338 "data_size": 63488 00:25:19.338 } 00:25:19.338 ] 00:25:19.338 }' 00:25:19.338 00:39:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:19.596 00:39:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:19.596 00:39:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:19.596 00:39:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:19.596 00:39:13 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.596 00:39:13 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:19.854 00:39:13 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:19.854 00:39:13 -- bdev/bdev_raid.sh@709 -- # killprocess 134084 00:25:19.854 00:39:13 -- common/autotest_common.sh@936 -- # '[' -z 134084 ']' 00:25:19.854 00:39:13 -- common/autotest_common.sh@940 -- # kill -0 134084 00:25:19.854 00:39:13 -- common/autotest_common.sh@941 -- # uname 00:25:19.854 00:39:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:19.854 00:39:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134084 00:25:19.854 killing process with pid 134084 00:25:19.854 Received shutdown signal, test time was about 60.000000 seconds 00:25:19.854 00:25:19.854 Latency(us) 00:25:19.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.854 =================================================================================================================== 00:25:19.854 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:19.854 00:39:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:19.854 00:39:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:19.854 00:39:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134084' 00:25:19.854 00:39:13 -- common/autotest_common.sh@955 -- # kill 134084 00:25:19.854 00:39:13 -- common/autotest_common.sh@960 -- # wait 134084 00:25:19.854 [2024-04-24 00:39:13.420447] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:19.854 [2024-04-24 00:39:13.420566] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:19.854 [2024-04-24 00:39:13.420648] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:19.854 [2024-04-24 00:39:13.420664] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:25:20.435 [2024-04-24 00:39:13.931160] 
bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:21.809 ************************************ 00:25:21.809 END TEST raid_rebuild_test_sb 00:25:21.809 ************************************ 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:21.809 00:25:21.809 real 0m30.185s 00:25:21.809 user 0m43.223s 00:25:21.809 sys 0m5.196s 00:25:21.809 00:39:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:21.809 00:39:15 -- common/autotest_common.sh@10 -- # set +x 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:25:21.809 00:39:15 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:25:21.809 00:39:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:21.809 00:39:15 -- common/autotest_common.sh@10 -- # set +x 00:25:21.809 ************************************ 00:25:21.809 START TEST raid_rebuild_test_io 00:25:21.809 ************************************ 00:25:21.809 00:39:15 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 false true 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:21.809 00:39:15 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:21.810 00:39:15 -- bdev/bdev_raid.sh@544 -- # raid_pid=134779 00:25:21.810 00:39:15 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134779 /var/tmp/spdk-raid.sock 00:25:21.810 00:39:15 -- common/autotest_common.sh@817 -- # '[' -z 134779 ']' 
00:25:21.810 00:39:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:21.810 00:39:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:21.810 00:39:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:21.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:21.810 00:39:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:21.810 00:39:15 -- common/autotest_common.sh@10 -- # set +x 00:25:21.810 [2024-04-24 00:39:15.421618] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:25:21.810 [2024-04-24 00:39:15.421944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134779 ] 00:25:21.810 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:21.810 Zero copy mechanism will not be used. 00:25:21.810 [2024-04-24 00:39:15.576690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.067 [2024-04-24 00:39:15.782003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.324 [2024-04-24 00:39:15.995011] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:22.970 00:39:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:22.970 00:39:16 -- common/autotest_common.sh@850 -- # return 0 00:25:22.970 00:39:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:22.970 00:39:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:22.970 00:39:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:22.970 BaseBdev1 00:25:22.970 00:39:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:22.970 00:39:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:22.970 00:39:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:23.363 BaseBdev2 00:25:23.363 00:39:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:23.363 00:39:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:23.363 00:39:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:23.620 BaseBdev3 00:25:23.620 00:39:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:23.879 00:39:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:23.879 00:39:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:24.137 BaseBdev4 00:25:24.137 00:39:17 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:24.137 spare_malloc 00:25:24.395 00:39:17 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:24.395 spare_delay 00:25:24.395 00:39:18 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:24.652 
[2024-04-24 00:39:18.307436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:24.652 [2024-04-24 00:39:18.307743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:24.652 [2024-04-24 00:39:18.307823] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:25:24.652 [2024-04-24 00:39:18.308070] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:24.652 [2024-04-24 00:39:18.310832] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:24.652 [2024-04-24 00:39:18.311040] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:24.652 spare 00:25:24.652 00:39:18 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:24.909 [2024-04-24 00:39:18.507567] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:24.909 [2024-04-24 00:39:18.509866] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:24.909 [2024-04-24 00:39:18.510056] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:24.909 [2024-04-24 00:39:18.510125] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:24.909 [2024-04-24 00:39:18.510268] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:25:24.909 [2024-04-24 00:39:18.510309] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:24.909 [2024-04-24 00:39:18.510561] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:25:24.909 [2024-04-24 00:39:18.511024] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:25:24.909 [2024-04-24 00:39:18.511131] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:25:24.910 [2024-04-24 00:39:18.511366] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:24.910 00:39:18 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:24.910 00:39:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:24.910 00:39:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:24.910 00:39:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:24.910 00:39:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:24.910 00:39:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:24.910 00:39:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:24.910 00:39:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:24.910 00:39:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:24.910 00:39:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:24.910 00:39:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.910 00:39:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.167 00:39:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:25.167 "name": "raid_bdev1", 00:25:25.167 "uuid": "9652cd1c-370a-43ae-a50d-526c073be777", 00:25:25.167 "strip_size_kb": 0, 00:25:25.167 "state": "online", 00:25:25.167 "raid_level": "raid1", 00:25:25.167 "superblock": 
false, 00:25:25.167 "num_base_bdevs": 4, 00:25:25.167 "num_base_bdevs_discovered": 4, 00:25:25.167 "num_base_bdevs_operational": 4, 00:25:25.167 "base_bdevs_list": [ 00:25:25.167 { 00:25:25.167 "name": "BaseBdev1", 00:25:25.167 "uuid": "592c3ff5-eb81-4483-bf94-9fc8c46ec26b", 00:25:25.167 "is_configured": true, 00:25:25.167 "data_offset": 0, 00:25:25.167 "data_size": 65536 00:25:25.167 }, 00:25:25.167 { 00:25:25.167 "name": "BaseBdev2", 00:25:25.167 "uuid": "a85255ea-0f8f-4d56-ae05-679dff4cf5db", 00:25:25.167 "is_configured": true, 00:25:25.167 "data_offset": 0, 00:25:25.167 "data_size": 65536 00:25:25.167 }, 00:25:25.167 { 00:25:25.167 "name": "BaseBdev3", 00:25:25.167 "uuid": "7598562f-aabc-4603-b276-3c8b72911ca1", 00:25:25.167 "is_configured": true, 00:25:25.167 "data_offset": 0, 00:25:25.167 "data_size": 65536 00:25:25.167 }, 00:25:25.167 { 00:25:25.167 "name": "BaseBdev4", 00:25:25.167 "uuid": "070e1de0-67c0-4ebd-bfb2-300b6e194bcd", 00:25:25.167 "is_configured": true, 00:25:25.167 "data_offset": 0, 00:25:25.167 "data_size": 65536 00:25:25.167 } 00:25:25.167 ] 00:25:25.167 }' 00:25:25.167 00:39:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:25.167 00:39:18 -- common/autotest_common.sh@10 -- # set +x 00:25:25.733 00:39:19 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:25.733 00:39:19 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:25.990 [2024-04-24 00:39:19.652198] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:25.990 00:39:19 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:25:25.990 00:39:19 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.990 00:39:19 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:26.248 00:39:19 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:26.248 00:39:19 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:25:26.248 00:39:19 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:26.248 00:39:19 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:26.248 [2024-04-24 00:39:19.972350] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:25:26.248 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:26.248 Zero copy mechanism will not be used. 00:25:26.248 Running I/O for 60 seconds... 
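The xtrace above captures the fields the test's state check relies on: after a base bdev is pulled out from under background bdevperf I/O, the array is expected to stay online at raid1 with 3 of the 4 base bdevs operational. Below is a condensed bash sketch of that kind of check, modelled on the verify_raid_bdev_state calls, the bdev_raid_get_bdevs RPC and the jq filter visible in the xtrace; it is illustrative only, not the verbatim helper from test/bdev/bdev_raid.sh, and the rpc_py path simply reuses the one shown in the log.

    # Illustrative sketch of the state check behind "verify_raid_bdev_state raid_bdev1 online raid1 0 3"
    # (not the verbatim bdev_raid.sh helper; inferred from the RPC output captured above).
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    verify_raid_bdev_state() {
        local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5
        local info

        # Same RPC and jq filter as in the xtrace: fetch all raid bdevs, keep the one under test.
        info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")

        [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] || return 1
        [[ $(jq -r '.raid_level' <<< "$info") == "$raid_level" ]] || return 1
        (($(jq -r '.strip_size_kb' <<< "$info") == strip_size)) || return 1
        (($(jq -r '.num_base_bdevs_operational' <<< "$info") == operational)) || return 1
    }

    # As exercised right after BaseBdev1 is removed during background I/O:
    verify_raid_bdev_state raid_bdev1 online raid1 0 3

The null placeholder entry with uuid 00000000-0000-0000-0000-000000000000 in base_bdevs_list is how the removed slot shows up in the same JSON, which is why the counts drop to 3 while num_base_bdevs stays 4.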
00:25:26.506 [2024-04-24 00:39:20.078308] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:26.506 [2024-04-24 00:39:20.085355] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:25:26.506 00:39:20 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:26.506 00:39:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:26.506 00:39:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:26.506 00:39:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:26.506 00:39:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:26.506 00:39:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:26.506 00:39:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:26.506 00:39:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:26.506 00:39:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:26.506 00:39:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:26.506 00:39:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.506 00:39:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.764 00:39:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:26.764 "name": "raid_bdev1", 00:25:26.764 "uuid": "9652cd1c-370a-43ae-a50d-526c073be777", 00:25:26.764 "strip_size_kb": 0, 00:25:26.764 "state": "online", 00:25:26.764 "raid_level": "raid1", 00:25:26.764 "superblock": false, 00:25:26.764 "num_base_bdevs": 4, 00:25:26.764 "num_base_bdevs_discovered": 3, 00:25:26.764 "num_base_bdevs_operational": 3, 00:25:26.764 "base_bdevs_list": [ 00:25:26.764 { 00:25:26.764 "name": null, 00:25:26.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.764 "is_configured": false, 00:25:26.764 "data_offset": 0, 00:25:26.764 "data_size": 65536 00:25:26.764 }, 00:25:26.764 { 00:25:26.764 "name": "BaseBdev2", 00:25:26.764 "uuid": "a85255ea-0f8f-4d56-ae05-679dff4cf5db", 00:25:26.764 "is_configured": true, 00:25:26.764 "data_offset": 0, 00:25:26.764 "data_size": 65536 00:25:26.764 }, 00:25:26.764 { 00:25:26.764 "name": "BaseBdev3", 00:25:26.764 "uuid": "7598562f-aabc-4603-b276-3c8b72911ca1", 00:25:26.764 "is_configured": true, 00:25:26.764 "data_offset": 0, 00:25:26.764 "data_size": 65536 00:25:26.764 }, 00:25:26.764 { 00:25:26.764 "name": "BaseBdev4", 00:25:26.764 "uuid": "070e1de0-67c0-4ebd-bfb2-300b6e194bcd", 00:25:26.764 "is_configured": true, 00:25:26.764 "data_offset": 0, 00:25:26.764 "data_size": 65536 00:25:26.764 } 00:25:26.764 ] 00:25:26.764 }' 00:25:26.764 00:39:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:26.764 00:39:20 -- common/autotest_common.sh@10 -- # set +x 00:25:27.329 00:39:21 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:27.588 [2024-04-24 00:39:21.302932] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:27.588 [2024-04-24 00:39:21.303233] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:27.588 00:39:21 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:27.847 [2024-04-24 00:39:21.387446] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:27.847 [2024-04-24 00:39:21.389832] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:27.847 [2024-04-24 
00:39:21.508945] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:27.847 [2024-04-24 00:39:21.509703] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:27.847 [2024-04-24 00:39:21.629550] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:27.847 [2024-04-24 00:39:21.629978] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:28.413 [2024-04-24 00:39:21.981215] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:28.413 [2024-04-24 00:39:21.982757] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:28.672 00:39:22 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:28.672 00:39:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:28.672 00:39:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:28.672 00:39:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:28.672 00:39:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:28.672 00:39:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.672 00:39:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.672 [2024-04-24 00:39:22.381890] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:28.672 [2024-04-24 00:39:22.382545] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:28.931 00:39:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:28.931 "name": "raid_bdev1", 00:25:28.931 "uuid": "9652cd1c-370a-43ae-a50d-526c073be777", 00:25:28.931 "strip_size_kb": 0, 00:25:28.931 "state": "online", 00:25:28.931 "raid_level": "raid1", 00:25:28.931 "superblock": false, 00:25:28.931 "num_base_bdevs": 4, 00:25:28.931 "num_base_bdevs_discovered": 4, 00:25:28.931 "num_base_bdevs_operational": 4, 00:25:28.931 "process": { 00:25:28.931 "type": "rebuild", 00:25:28.931 "target": "spare", 00:25:28.931 "progress": { 00:25:28.931 "blocks": 14336, 00:25:28.931 "percent": 21 00:25:28.931 } 00:25:28.931 }, 00:25:28.931 "base_bdevs_list": [ 00:25:28.931 { 00:25:28.931 "name": "spare", 00:25:28.931 "uuid": "401a7d8a-4128-5f5a-8d49-971348f93775", 00:25:28.931 "is_configured": true, 00:25:28.931 "data_offset": 0, 00:25:28.931 "data_size": 65536 00:25:28.931 }, 00:25:28.931 { 00:25:28.931 "name": "BaseBdev2", 00:25:28.931 "uuid": "a85255ea-0f8f-4d56-ae05-679dff4cf5db", 00:25:28.931 "is_configured": true, 00:25:28.931 "data_offset": 0, 00:25:28.931 "data_size": 65536 00:25:28.931 }, 00:25:28.931 { 00:25:28.931 "name": "BaseBdev3", 00:25:28.931 "uuid": "7598562f-aabc-4603-b276-3c8b72911ca1", 00:25:28.931 "is_configured": true, 00:25:28.931 "data_offset": 0, 00:25:28.931 "data_size": 65536 00:25:28.931 }, 00:25:28.931 { 00:25:28.931 "name": "BaseBdev4", 00:25:28.931 "uuid": "070e1de0-67c0-4ebd-bfb2-300b6e194bcd", 00:25:28.931 "is_configured": true, 00:25:28.931 "data_offset": 0, 00:25:28.931 "data_size": 65536 00:25:28.931 } 00:25:28.931 ] 00:25:28.931 }' 00:25:28.931 00:39:22 -- bdev/bdev_raid.sh@190 -- # 
jq -r '.process.type // "none"' 00:25:28.931 [2024-04-24 00:39:22.630059] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:28.931 00:39:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:28.931 00:39:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:28.931 00:39:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:28.931 00:39:22 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:29.189 [2024-04-24 00:39:22.966894] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:29.189 [2024-04-24 00:39:22.981030] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:29.448 [2024-04-24 00:39:23.083485] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:29.448 [2024-04-24 00:39:23.101685] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.448 [2024-04-24 00:39:23.128007] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:25:29.448 00:39:23 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:29.448 00:39:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:29.448 00:39:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:29.448 00:39:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:29.448 00:39:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:29.448 00:39:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:29.448 00:39:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:29.448 00:39:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:29.448 00:39:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:29.448 00:39:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:29.448 00:39:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.448 00:39:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.743 00:39:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:29.743 "name": "raid_bdev1", 00:25:29.743 "uuid": "9652cd1c-370a-43ae-a50d-526c073be777", 00:25:29.743 "strip_size_kb": 0, 00:25:29.743 "state": "online", 00:25:29.743 "raid_level": "raid1", 00:25:29.743 "superblock": false, 00:25:29.743 "num_base_bdevs": 4, 00:25:29.743 "num_base_bdevs_discovered": 3, 00:25:29.743 "num_base_bdevs_operational": 3, 00:25:29.743 "base_bdevs_list": [ 00:25:29.743 { 00:25:29.743 "name": null, 00:25:29.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.743 "is_configured": false, 00:25:29.743 "data_offset": 0, 00:25:29.743 "data_size": 65536 00:25:29.743 }, 00:25:29.743 { 00:25:29.743 "name": "BaseBdev2", 00:25:29.743 "uuid": "a85255ea-0f8f-4d56-ae05-679dff4cf5db", 00:25:29.743 "is_configured": true, 00:25:29.743 "data_offset": 0, 00:25:29.743 "data_size": 65536 00:25:29.743 }, 00:25:29.743 { 00:25:29.743 "name": "BaseBdev3", 00:25:29.743 "uuid": "7598562f-aabc-4603-b276-3c8b72911ca1", 00:25:29.743 "is_configured": true, 00:25:29.743 "data_offset": 0, 00:25:29.743 "data_size": 65536 00:25:29.743 }, 00:25:29.743 { 00:25:29.743 "name": "BaseBdev4", 00:25:29.743 "uuid": "070e1de0-67c0-4ebd-bfb2-300b6e194bcd", 
00:25:29.743 "is_configured": true, 00:25:29.743 "data_offset": 0, 00:25:29.743 "data_size": 65536 00:25:29.743 } 00:25:29.743 ] 00:25:29.743 }' 00:25:29.743 00:39:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:29.743 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:30.681 "name": "raid_bdev1", 00:25:30.681 "uuid": "9652cd1c-370a-43ae-a50d-526c073be777", 00:25:30.681 "strip_size_kb": 0, 00:25:30.681 "state": "online", 00:25:30.681 "raid_level": "raid1", 00:25:30.681 "superblock": false, 00:25:30.681 "num_base_bdevs": 4, 00:25:30.681 "num_base_bdevs_discovered": 3, 00:25:30.681 "num_base_bdevs_operational": 3, 00:25:30.681 "base_bdevs_list": [ 00:25:30.681 { 00:25:30.681 "name": null, 00:25:30.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.681 "is_configured": false, 00:25:30.681 "data_offset": 0, 00:25:30.681 "data_size": 65536 00:25:30.681 }, 00:25:30.681 { 00:25:30.681 "name": "BaseBdev2", 00:25:30.681 "uuid": "a85255ea-0f8f-4d56-ae05-679dff4cf5db", 00:25:30.681 "is_configured": true, 00:25:30.681 "data_offset": 0, 00:25:30.681 "data_size": 65536 00:25:30.681 }, 00:25:30.681 { 00:25:30.681 "name": "BaseBdev3", 00:25:30.681 "uuid": "7598562f-aabc-4603-b276-3c8b72911ca1", 00:25:30.681 "is_configured": true, 00:25:30.681 "data_offset": 0, 00:25:30.681 "data_size": 65536 00:25:30.681 }, 00:25:30.681 { 00:25:30.681 "name": "BaseBdev4", 00:25:30.681 "uuid": "070e1de0-67c0-4ebd-bfb2-300b6e194bcd", 00:25:30.681 "is_configured": true, 00:25:30.681 "data_offset": 0, 00:25:30.681 "data_size": 65536 00:25:30.681 } 00:25:30.681 ] 00:25:30.681 }' 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:30.681 00:39:24 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:30.940 [2024-04-24 00:39:24.625223] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:30.940 [2024-04-24 00:39:24.625482] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:30.940 00:39:24 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:30.940 [2024-04-24 00:39:24.674905] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:30.940 [2024-04-24 00:39:24.676947] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:31.198 [2024-04-24 00:39:24.787358] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:31.198 [2024-04-24 
00:39:24.788024] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:31.198 [2024-04-24 00:39:24.912861] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:31.198 [2024-04-24 00:39:24.913766] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:31.769 [2024-04-24 00:39:25.366221] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:32.027 00:39:25 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:32.027 00:39:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:32.027 00:39:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:32.027 00:39:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:32.027 00:39:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:32.027 00:39:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.027 00:39:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.027 [2024-04-24 00:39:25.726111] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:32.285 00:39:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:32.285 "name": "raid_bdev1", 00:25:32.285 "uuid": "9652cd1c-370a-43ae-a50d-526c073be777", 00:25:32.285 "strip_size_kb": 0, 00:25:32.285 "state": "online", 00:25:32.285 "raid_level": "raid1", 00:25:32.285 "superblock": false, 00:25:32.285 "num_base_bdevs": 4, 00:25:32.285 "num_base_bdevs_discovered": 4, 00:25:32.285 "num_base_bdevs_operational": 4, 00:25:32.285 "process": { 00:25:32.285 "type": "rebuild", 00:25:32.285 "target": "spare", 00:25:32.285 "progress": { 00:25:32.285 "blocks": 16384, 00:25:32.285 "percent": 25 00:25:32.285 } 00:25:32.285 }, 00:25:32.285 "base_bdevs_list": [ 00:25:32.285 { 00:25:32.285 "name": "spare", 00:25:32.285 "uuid": "401a7d8a-4128-5f5a-8d49-971348f93775", 00:25:32.285 "is_configured": true, 00:25:32.285 "data_offset": 0, 00:25:32.285 "data_size": 65536 00:25:32.285 }, 00:25:32.285 { 00:25:32.285 "name": "BaseBdev2", 00:25:32.285 "uuid": "a85255ea-0f8f-4d56-ae05-679dff4cf5db", 00:25:32.285 "is_configured": true, 00:25:32.285 "data_offset": 0, 00:25:32.285 "data_size": 65536 00:25:32.285 }, 00:25:32.285 { 00:25:32.285 "name": "BaseBdev3", 00:25:32.285 "uuid": "7598562f-aabc-4603-b276-3c8b72911ca1", 00:25:32.285 "is_configured": true, 00:25:32.285 "data_offset": 0, 00:25:32.285 "data_size": 65536 00:25:32.285 }, 00:25:32.285 { 00:25:32.285 "name": "BaseBdev4", 00:25:32.285 "uuid": "070e1de0-67c0-4ebd-bfb2-300b6e194bcd", 00:25:32.285 "is_configured": true, 00:25:32.285 "data_offset": 0, 00:25:32.285 "data_size": 65536 00:25:32.285 } 00:25:32.285 ] 00:25:32.285 }' 00:25:32.285 00:39:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:32.285 00:39:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:32.285 00:39:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:32.285 00:39:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:32.285 00:39:26 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:32.285 00:39:26 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:32.285 00:39:26 -- 
bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:25:32.285 00:39:26 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:25:32.285 00:39:26 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:32.285 [2024-04-24 00:39:26.051609] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:32.543 [2024-04-24 00:39:26.195922] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:32.800 [2024-04-24 00:39:26.393380] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ba0 00:25:32.800 [2024-04-24 00:39:26.393628] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005e10 00:25:32.800 00:39:26 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:25:32.800 00:39:26 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:25:32.800 00:39:26 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:32.800 00:39:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:32.800 00:39:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:32.800 00:39:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:32.800 00:39:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:32.800 00:39:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.800 00:39:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:33.058 "name": "raid_bdev1", 00:25:33.058 "uuid": "9652cd1c-370a-43ae-a50d-526c073be777", 00:25:33.058 "strip_size_kb": 0, 00:25:33.058 "state": "online", 00:25:33.058 "raid_level": "raid1", 00:25:33.058 "superblock": false, 00:25:33.058 "num_base_bdevs": 4, 00:25:33.058 "num_base_bdevs_discovered": 3, 00:25:33.058 "num_base_bdevs_operational": 3, 00:25:33.058 "process": { 00:25:33.058 "type": "rebuild", 00:25:33.058 "target": "spare", 00:25:33.058 "progress": { 00:25:33.058 "blocks": 28672, 00:25:33.058 "percent": 43 00:25:33.058 } 00:25:33.058 }, 00:25:33.058 "base_bdevs_list": [ 00:25:33.058 { 00:25:33.058 "name": "spare", 00:25:33.058 "uuid": "401a7d8a-4128-5f5a-8d49-971348f93775", 00:25:33.058 "is_configured": true, 00:25:33.058 "data_offset": 0, 00:25:33.058 "data_size": 65536 00:25:33.058 }, 00:25:33.058 { 00:25:33.058 "name": null, 00:25:33.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.058 "is_configured": false, 00:25:33.058 "data_offset": 0, 00:25:33.058 "data_size": 65536 00:25:33.058 }, 00:25:33.058 { 00:25:33.058 "name": "BaseBdev3", 00:25:33.058 "uuid": "7598562f-aabc-4603-b276-3c8b72911ca1", 00:25:33.058 "is_configured": true, 00:25:33.058 "data_offset": 0, 00:25:33.058 "data_size": 65536 00:25:33.058 }, 00:25:33.058 { 00:25:33.058 "name": "BaseBdev4", 00:25:33.058 "uuid": "070e1de0-67c0-4ebd-bfb2-300b6e194bcd", 00:25:33.058 "is_configured": true, 00:25:33.058 "data_offset": 0, 00:25:33.058 "data_size": 65536 00:25:33.058 } 00:25:33.058 ] 00:25:33.058 }' 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 
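The state checks above reduce to one RPC call plus a couple of jq filters over its JSON output. A minimal sketch of that pattern, reusing the same socket path, bdev name, and filter expressions that appear in this run (the shell variable names are illustrative, not part of the test scripts):

    # Query the raid bdev over the RPC socket and pull out the rebuild-process fields.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # ".process" is only present while a rebuild is active, so default both fields to "none".
    ptype=$(jq -r '.process.type // "none"' <<< "$info")
    ptarget=$(jq -r '.process.target // "none"' <<< "$info")
    [[ $ptype == rebuild && $ptarget == spare ]] && echo "rebuild onto spare is in progress"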
00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@657 -- # local timeout=589 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.058 00:39:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.315 [2024-04-24 00:39:26.856639] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:33.315 00:39:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:33.315 "name": "raid_bdev1", 00:25:33.315 "uuid": "9652cd1c-370a-43ae-a50d-526c073be777", 00:25:33.315 "strip_size_kb": 0, 00:25:33.315 "state": "online", 00:25:33.315 "raid_level": "raid1", 00:25:33.315 "superblock": false, 00:25:33.315 "num_base_bdevs": 4, 00:25:33.315 "num_base_bdevs_discovered": 3, 00:25:33.315 "num_base_bdevs_operational": 3, 00:25:33.315 "process": { 00:25:33.315 "type": "rebuild", 00:25:33.315 "target": "spare", 00:25:33.315 "progress": { 00:25:33.315 "blocks": 32768, 00:25:33.315 "percent": 50 00:25:33.315 } 00:25:33.315 }, 00:25:33.315 "base_bdevs_list": [ 00:25:33.315 { 00:25:33.315 "name": "spare", 00:25:33.315 "uuid": "401a7d8a-4128-5f5a-8d49-971348f93775", 00:25:33.315 "is_configured": true, 00:25:33.315 "data_offset": 0, 00:25:33.315 "data_size": 65536 00:25:33.315 }, 00:25:33.315 { 00:25:33.315 "name": null, 00:25:33.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.315 "is_configured": false, 00:25:33.315 "data_offset": 0, 00:25:33.315 "data_size": 65536 00:25:33.315 }, 00:25:33.315 { 00:25:33.315 "name": "BaseBdev3", 00:25:33.315 "uuid": "7598562f-aabc-4603-b276-3c8b72911ca1", 00:25:33.315 "is_configured": true, 00:25:33.315 "data_offset": 0, 00:25:33.315 "data_size": 65536 00:25:33.315 }, 00:25:33.315 { 00:25:33.315 "name": "BaseBdev4", 00:25:33.315 "uuid": "070e1de0-67c0-4ebd-bfb2-300b6e194bcd", 00:25:33.315 "is_configured": true, 00:25:33.315 "data_offset": 0, 00:25:33.315 "data_size": 65536 00:25:33.315 } 00:25:33.315 ] 00:25:33.315 }' 00:25:33.315 00:39:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:33.315 00:39:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:33.316 00:39:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:33.316 00:39:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:33.316 00:39:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:33.316 [2024-04-24 00:39:27.079043] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:33.881 [2024-04-24 00:39:27.414316] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:25:33.881 [2024-04-24 00:39:27.415549] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:25:34.466 [2024-04-24 00:39:27.938472] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
47104 offset_begin: 43008 offset_end: 49152 00:25:34.466 00:39:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:34.466 00:39:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:34.466 00:39:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:34.466 00:39:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:34.466 00:39:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:34.466 00:39:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:34.466 00:39:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.466 00:39:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.724 00:39:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:34.724 "name": "raid_bdev1", 00:25:34.724 "uuid": "9652cd1c-370a-43ae-a50d-526c073be777", 00:25:34.724 "strip_size_kb": 0, 00:25:34.724 "state": "online", 00:25:34.724 "raid_level": "raid1", 00:25:34.724 "superblock": false, 00:25:34.724 "num_base_bdevs": 4, 00:25:34.724 "num_base_bdevs_discovered": 3, 00:25:34.724 "num_base_bdevs_operational": 3, 00:25:34.724 "process": { 00:25:34.724 "type": "rebuild", 00:25:34.724 "target": "spare", 00:25:34.724 "progress": { 00:25:34.724 "blocks": 51200, 00:25:34.724 "percent": 78 00:25:34.724 } 00:25:34.724 }, 00:25:34.724 "base_bdevs_list": [ 00:25:34.724 { 00:25:34.725 "name": "spare", 00:25:34.725 "uuid": "401a7d8a-4128-5f5a-8d49-971348f93775", 00:25:34.725 "is_configured": true, 00:25:34.725 "data_offset": 0, 00:25:34.725 "data_size": 65536 00:25:34.725 }, 00:25:34.725 { 00:25:34.725 "name": null, 00:25:34.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:34.725 "is_configured": false, 00:25:34.725 "data_offset": 0, 00:25:34.725 "data_size": 65536 00:25:34.725 }, 00:25:34.725 { 00:25:34.725 "name": "BaseBdev3", 00:25:34.725 "uuid": "7598562f-aabc-4603-b276-3c8b72911ca1", 00:25:34.725 "is_configured": true, 00:25:34.725 "data_offset": 0, 00:25:34.725 "data_size": 65536 00:25:34.725 }, 00:25:34.725 { 00:25:34.725 "name": "BaseBdev4", 00:25:34.725 "uuid": "070e1de0-67c0-4ebd-bfb2-300b6e194bcd", 00:25:34.725 "is_configured": true, 00:25:34.725 "data_offset": 0, 00:25:34.725 "data_size": 65536 00:25:34.725 } 00:25:34.725 ] 00:25:34.725 }' 00:25:34.725 00:39:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:34.725 00:39:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:34.725 00:39:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:34.725 00:39:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:34.725 00:39:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:34.983 [2024-04-24 00:39:28.719090] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:25:35.550 [2024-04-24 00:39:29.053153] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:35.550 [2024-04-24 00:39:29.159122] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:35.550 [2024-04-24 00:39:29.162058] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:35.809 00:39:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:35.809 00:39:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:35.809 00:39:29 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:25:35.809 00:39:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:35.809 00:39:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:35.809 00:39:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:35.809 00:39:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.809 00:39:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.068 00:39:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:36.068 "name": "raid_bdev1", 00:25:36.068 "uuid": "9652cd1c-370a-43ae-a50d-526c073be777", 00:25:36.068 "strip_size_kb": 0, 00:25:36.068 "state": "online", 00:25:36.068 "raid_level": "raid1", 00:25:36.068 "superblock": false, 00:25:36.068 "num_base_bdevs": 4, 00:25:36.068 "num_base_bdevs_discovered": 3, 00:25:36.068 "num_base_bdevs_operational": 3, 00:25:36.068 "base_bdevs_list": [ 00:25:36.068 { 00:25:36.068 "name": "spare", 00:25:36.068 "uuid": "401a7d8a-4128-5f5a-8d49-971348f93775", 00:25:36.068 "is_configured": true, 00:25:36.068 "data_offset": 0, 00:25:36.068 "data_size": 65536 00:25:36.068 }, 00:25:36.068 { 00:25:36.068 "name": null, 00:25:36.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.068 "is_configured": false, 00:25:36.068 "data_offset": 0, 00:25:36.068 "data_size": 65536 00:25:36.068 }, 00:25:36.068 { 00:25:36.068 "name": "BaseBdev3", 00:25:36.068 "uuid": "7598562f-aabc-4603-b276-3c8b72911ca1", 00:25:36.068 "is_configured": true, 00:25:36.068 "data_offset": 0, 00:25:36.068 "data_size": 65536 00:25:36.068 }, 00:25:36.068 { 00:25:36.068 "name": "BaseBdev4", 00:25:36.068 "uuid": "070e1de0-67c0-4ebd-bfb2-300b6e194bcd", 00:25:36.068 "is_configured": true, 00:25:36.068 "data_offset": 0, 00:25:36.068 "data_size": 65536 00:25:36.068 } 00:25:36.068 ] 00:25:36.068 }' 00:25:36.068 00:39:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:36.068 00:39:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:36.068 00:39:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:36.068 00:39:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:36.068 00:39:29 -- bdev/bdev_raid.sh@660 -- # break 00:25:36.068 00:39:29 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:36.068 00:39:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:36.069 00:39:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:36.069 00:39:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:36.069 00:39:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:36.069 00:39:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.069 00:39:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.328 00:39:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:36.328 "name": "raid_bdev1", 00:25:36.328 "uuid": "9652cd1c-370a-43ae-a50d-526c073be777", 00:25:36.328 "strip_size_kb": 0, 00:25:36.328 "state": "online", 00:25:36.328 "raid_level": "raid1", 00:25:36.328 "superblock": false, 00:25:36.328 "num_base_bdevs": 4, 00:25:36.328 "num_base_bdevs_discovered": 3, 00:25:36.328 "num_base_bdevs_operational": 3, 00:25:36.328 "base_bdevs_list": [ 00:25:36.328 { 00:25:36.328 "name": "spare", 00:25:36.328 "uuid": "401a7d8a-4128-5f5a-8d49-971348f93775", 00:25:36.328 "is_configured": true, 00:25:36.328 "data_offset": 0, 00:25:36.328 
"data_size": 65536 00:25:36.328 }, 00:25:36.328 { 00:25:36.328 "name": null, 00:25:36.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.328 "is_configured": false, 00:25:36.328 "data_offset": 0, 00:25:36.328 "data_size": 65536 00:25:36.328 }, 00:25:36.328 { 00:25:36.328 "name": "BaseBdev3", 00:25:36.328 "uuid": "7598562f-aabc-4603-b276-3c8b72911ca1", 00:25:36.328 "is_configured": true, 00:25:36.328 "data_offset": 0, 00:25:36.328 "data_size": 65536 00:25:36.328 }, 00:25:36.328 { 00:25:36.328 "name": "BaseBdev4", 00:25:36.328 "uuid": "070e1de0-67c0-4ebd-bfb2-300b6e194bcd", 00:25:36.328 "is_configured": true, 00:25:36.328 "data_offset": 0, 00:25:36.328 "data_size": 65536 00:25:36.328 } 00:25:36.328 ] 00:25:36.328 }' 00:25:36.328 00:39:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.328 00:39:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.601 00:39:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:36.601 "name": "raid_bdev1", 00:25:36.601 "uuid": "9652cd1c-370a-43ae-a50d-526c073be777", 00:25:36.601 "strip_size_kb": 0, 00:25:36.601 "state": "online", 00:25:36.601 "raid_level": "raid1", 00:25:36.601 "superblock": false, 00:25:36.601 "num_base_bdevs": 4, 00:25:36.601 "num_base_bdevs_discovered": 3, 00:25:36.601 "num_base_bdevs_operational": 3, 00:25:36.601 "base_bdevs_list": [ 00:25:36.601 { 00:25:36.601 "name": "spare", 00:25:36.601 "uuid": "401a7d8a-4128-5f5a-8d49-971348f93775", 00:25:36.601 "is_configured": true, 00:25:36.601 "data_offset": 0, 00:25:36.601 "data_size": 65536 00:25:36.601 }, 00:25:36.601 { 00:25:36.601 "name": null, 00:25:36.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.601 "is_configured": false, 00:25:36.601 "data_offset": 0, 00:25:36.601 "data_size": 65536 00:25:36.601 }, 00:25:36.601 { 00:25:36.601 "name": "BaseBdev3", 00:25:36.601 "uuid": "7598562f-aabc-4603-b276-3c8b72911ca1", 00:25:36.601 "is_configured": true, 00:25:36.601 "data_offset": 0, 00:25:36.601 "data_size": 65536 00:25:36.601 }, 00:25:36.601 { 00:25:36.601 "name": "BaseBdev4", 00:25:36.601 "uuid": "070e1de0-67c0-4ebd-bfb2-300b6e194bcd", 00:25:36.601 "is_configured": true, 00:25:36.601 "data_offset": 0, 00:25:36.601 "data_size": 65536 00:25:36.601 } 00:25:36.601 ] 00:25:36.601 }' 00:25:36.601 00:39:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:36.601 
00:39:30 -- common/autotest_common.sh@10 -- # set +x 00:25:37.167 00:39:30 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:37.426 [2024-04-24 00:39:31.004468] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:37.426 [2024-04-24 00:39:31.004678] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:37.426 00:25:37.426 Latency(us) 00:25:37.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.426 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:37.426 raid_bdev1 : 11.04 107.64 322.92 0.00 0.00 13038.14 323.78 116841.33 00:25:37.426 =================================================================================================================== 00:25:37.426 Total : 107.64 322.92 0.00 0.00 13038.14 323.78 116841.33 00:25:37.426 [2024-04-24 00:39:31.036159] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.426 [2024-04-24 00:39:31.036363] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:37.426 [2024-04-24 00:39:31.036512] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:37.426 0 00:25:37.426 [2024-04-24 00:39:31.036626] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:25:37.426 00:39:31 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:37.426 00:39:31 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.685 00:39:31 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:37.685 00:39:31 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:25:37.685 00:39:31 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:37.685 00:39:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:37.685 00:39:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:37.685 00:39:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:37.685 00:39:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:37.685 00:39:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:37.685 00:39:31 -- bdev/nbd_common.sh@12 -- # local i 00:25:37.685 00:39:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:37.685 00:39:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:37.685 00:39:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:37.944 /dev/nbd0 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:37.944 00:39:31 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:37.944 00:39:31 -- common/autotest_common.sh@855 -- # local i 00:25:37.944 00:39:31 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:37.944 00:39:31 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:37.944 00:39:31 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:37.944 00:39:31 -- common/autotest_common.sh@859 -- # break 00:25:37.944 00:39:31 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:37.944 00:39:31 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:37.944 00:39:31 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:37.944 1+0 records in 00:25:37.944 1+0 records out 00:25:37.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439305 s, 9.3 MB/s 00:25:37.944 00:39:31 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:37.944 00:39:31 -- common/autotest_common.sh@872 -- # size=4096 00:25:37.944 00:39:31 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:37.944 00:39:31 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:37.944 00:39:31 -- common/autotest_common.sh@875 -- # return 0 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:37.944 00:39:31 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:37.944 00:39:31 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:25:37.944 00:39:31 -- bdev/bdev_raid.sh@678 -- # continue 00:25:37.944 00:39:31 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:37.944 00:39:31 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:25:37.944 00:39:31 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@12 -- # local i 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:37.944 00:39:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:25:38.203 /dev/nbd1 00:25:38.204 00:39:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:38.204 00:39:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:38.204 00:39:31 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:25:38.204 00:39:31 -- common/autotest_common.sh@855 -- # local i 00:25:38.204 00:39:31 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:38.204 00:39:31 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:38.204 00:39:31 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:25:38.204 00:39:31 -- common/autotest_common.sh@859 -- # break 00:25:38.204 00:39:31 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:38.204 00:39:31 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:38.204 00:39:31 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:38.204 1+0 records in 00:25:38.204 1+0 records out 00:25:38.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485861 s, 8.4 MB/s 00:25:38.204 00:39:31 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:38.462 00:39:31 -- common/autotest_common.sh@872 -- # size=4096 00:25:38.462 00:39:31 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:38.462 00:39:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:38.462 00:39:32 -- common/autotest_common.sh@875 -- # return 0 00:25:38.462 00:39:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
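The waitfornbd helper traced above is exposing a bdev as a kernel NBD device and then proving the node actually serves data. A rough equivalent of that sequence, assuming the same RPC socket and using /tmp for the scratch file (the real harness writes under test/bdev/nbdtest and bounds its retries):

    # Export the rebuilt "spare" bdev as /dev/nbd0 and read one 4 KiB block through it.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk spare /dev/nbd0
    # Wait until the kernel has registered the device.
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    # A direct-I/O read confirms the NBD session is up and serving data.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct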
00:25:38.462 00:39:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:38.462 00:39:32 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:38.462 00:39:32 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:38.462 00:39:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:38.462 00:39:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:38.462 00:39:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:38.462 00:39:32 -- bdev/nbd_common.sh@51 -- # local i 00:25:38.462 00:39:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:38.462 00:39:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@41 -- # break 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@45 -- # return 0 00:25:38.741 00:39:32 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:38.741 00:39:32 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:25:38.741 00:39:32 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@12 -- # local i 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:38.741 00:39:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:25:39.016 /dev/nbd1 00:25:39.016 00:39:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:39.016 00:39:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:39.016 00:39:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:25:39.016 00:39:32 -- common/autotest_common.sh@855 -- # local i 00:25:39.016 00:39:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:39.016 00:39:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:39.016 00:39:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:25:39.016 00:39:32 -- common/autotest_common.sh@859 -- # break 00:25:39.016 00:39:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:39.016 00:39:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:39.016 00:39:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:39.016 1+0 records in 00:25:39.016 1+0 records out 00:25:39.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384105 s, 10.7 MB/s 00:25:39.016 00:39:32 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:39.016 00:39:32 -- common/autotest_common.sh@872 -- # size=4096 
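The integrity check itself is a byte-for-byte comparison of the rebuilt spare against each surviving base bdev through their NBD nodes, followed by tearing the second NBD session down before the next member is checked. A condensed sketch, assuming /dev/nbd0 already carries the spare and /dev/nbd1 the base bdev under test:

    # Compare the two block devices starting at offset 0; cmp exits non-zero on any mismatch.
    cmp -i 0 /dev/nbd0 /dev/nbd1
    # Detach the base bdev's NBD node and wait for the kernel to drop it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
    while grep -q -w nbd1 /proc/partitions; do sleep 0.1; done

The polling loops here are simplified; the harness caps them at 20 attempts before failing the test.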
00:25:39.016 00:39:32 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:39.016 00:39:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:39.016 00:39:32 -- common/autotest_common.sh@875 -- # return 0 00:25:39.016 00:39:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:39.016 00:39:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:39.016 00:39:32 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:39.274 00:39:32 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:39.274 00:39:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:39.274 00:39:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:39.274 00:39:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:39.274 00:39:32 -- bdev/nbd_common.sh@51 -- # local i 00:25:39.274 00:39:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:39.274 00:39:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@41 -- # break 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@45 -- # return 0 00:25:39.533 00:39:33 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@51 -- # local i 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:39.533 00:39:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:39.793 00:39:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:39.793 00:39:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:39.793 00:39:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:39.793 00:39:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:39.793 00:39:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:39.793 00:39:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:39.793 00:39:33 -- bdev/nbd_common.sh@41 -- # break 00:25:39.793 00:39:33 -- bdev/nbd_common.sh@45 -- # return 0 00:25:39.793 00:39:33 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:39.793 00:39:33 -- bdev/bdev_raid.sh@709 -- # killprocess 134779 00:25:39.793 00:39:33 -- common/autotest_common.sh@936 -- # '[' -z 134779 ']' 00:25:39.793 00:39:33 -- common/autotest_common.sh@940 -- # kill -0 134779 00:25:39.793 00:39:33 -- common/autotest_common.sh@941 -- # uname 00:25:39.793 00:39:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:39.793 00:39:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134779 00:25:39.793 00:39:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:39.793 00:39:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:39.793 00:39:33 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 134779' 00:25:39.793 killing process with pid 134779 00:25:39.793 00:39:33 -- common/autotest_common.sh@955 -- # kill 134779 00:25:39.793 Received shutdown signal, test time was about 13.521598 seconds 00:25:39.793 00:25:39.793 Latency(us) 00:25:39.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.793 =================================================================================================================== 00:25:39.793 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.793 00:39:33 -- common/autotest_common.sh@960 -- # wait 134779 00:25:39.793 [2024-04-24 00:39:33.496735] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:40.359 [2024-04-24 00:39:33.939738] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:41.732 00:39:35 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:41.732 00:25:41.732 real 0m20.102s 00:25:41.732 user 0m30.304s 00:25:41.732 sys 0m2.820s 00:25:41.732 00:39:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:41.732 00:39:35 -- common/autotest_common.sh@10 -- # set +x 00:25:41.732 ************************************ 00:25:41.732 END TEST raid_rebuild_test_io 00:25:41.732 ************************************ 00:25:41.732 00:39:35 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:25:41.732 00:39:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:25:41.732 00:39:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:41.732 00:39:35 -- common/autotest_common.sh@10 -- # set +x 00:25:42.017 ************************************ 00:25:42.017 START TEST raid_rebuild_test_sb_io 00:25:42.017 ************************************ 00:25:42.017 00:39:35 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 true true 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@524 -- # local 
create_arg 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@544 -- # raid_pid=135307 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135307 /var/tmp/spdk-raid.sock 00:25:42.017 00:39:35 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:42.017 00:39:35 -- common/autotest_common.sh@817 -- # '[' -z 135307 ']' 00:25:42.017 00:39:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:42.017 00:39:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:42.017 00:39:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:42.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:42.017 00:39:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:42.017 00:39:35 -- common/autotest_common.sh@10 -- # set +x 00:25:42.017 [2024-04-24 00:39:35.639908] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:25:42.017 [2024-04-24 00:39:35.640309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135307 ] 00:25:42.017 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:42.017 Zero copy mechanism will not be used. 
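For the sb_io variant the harness runs bdevperf itself as the RPC server and defers the workload until it is triggered later over that socket. A trimmed-down version of the launch recorded above (the socket-polling loop is a simplification of waitforlisten from autotest_common.sh, and raid_pid is an illustrative name):

    # 60 s of 50/50 random read/write, 3 MiB I/Os, queue depth 2, on raid_bdev1;
    # -z holds the workload until a perform_tests RPC arrives, -L enables bdev_raid debug logs.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Block until the RPC socket exists before issuing bdev_* RPCs against it.
    until [ -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done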
00:25:42.017 [2024-04-24 00:39:35.803618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.581 [2024-04-24 00:39:36.082918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.581 [2024-04-24 00:39:36.360730] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:43.147 00:39:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:43.147 00:39:36 -- common/autotest_common.sh@850 -- # return 0 00:25:43.147 00:39:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:43.147 00:39:36 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:43.147 00:39:36 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:43.405 BaseBdev1_malloc 00:25:43.405 00:39:37 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:43.663 [2024-04-24 00:39:37.265652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:43.663 [2024-04-24 00:39:37.265903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:43.663 [2024-04-24 00:39:37.266033] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:25:43.663 [2024-04-24 00:39:37.266174] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:43.663 [2024-04-24 00:39:37.268784] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:43.663 [2024-04-24 00:39:37.268968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:43.663 BaseBdev1 00:25:43.663 00:39:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:43.663 00:39:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:43.663 00:39:37 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:43.921 BaseBdev2_malloc 00:25:43.921 00:39:37 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:44.185 [2024-04-24 00:39:37.832192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:44.185 [2024-04-24 00:39:37.832450] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.185 [2024-04-24 00:39:37.832576] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:44.185 [2024-04-24 00:39:37.832715] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.185 [2024-04-24 00:39:37.835225] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.185 [2024-04-24 00:39:37.835390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:44.185 BaseBdev2 00:25:44.185 00:39:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:44.185 00:39:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:44.185 00:39:37 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:44.468 BaseBdev3_malloc 00:25:44.468 00:39:38 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:25:44.725 [2024-04-24 00:39:38.360606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:44.725 [2024-04-24 00:39:38.360884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.725 [2024-04-24 00:39:38.360963] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:44.725 [2024-04-24 00:39:38.361160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.726 [2024-04-24 00:39:38.363788] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.726 [2024-04-24 00:39:38.363969] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:44.726 BaseBdev3 00:25:44.726 00:39:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:44.726 00:39:38 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:44.726 00:39:38 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:44.983 BaseBdev4_malloc 00:25:44.983 00:39:38 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:45.239 [2024-04-24 00:39:38.909849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:45.240 [2024-04-24 00:39:38.910094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.240 [2024-04-24 00:39:38.910164] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:45.240 [2024-04-24 00:39:38.910281] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.240 [2024-04-24 00:39:38.912834] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.240 [2024-04-24 00:39:38.912993] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:45.240 BaseBdev4 00:25:45.240 00:39:38 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:45.496 spare_malloc 00:25:45.496 00:39:39 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:45.753 spare_delay 00:25:45.753 00:39:39 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:46.010 [2024-04-24 00:39:39.701926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:46.010 [2024-04-24 00:39:39.702177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.010 [2024-04-24 00:39:39.702244] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:46.010 [2024-04-24 00:39:39.702365] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.010 [2024-04-24 00:39:39.704757] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.010 [2024-04-24 00:39:39.704919] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:46.011 spare 00:25:46.011 00:39:39 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:46.268 [2024-04-24 00:39:39.885996] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:46.268 [2024-04-24 00:39:39.888295] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:46.268 [2024-04-24 00:39:39.888496] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:46.268 [2024-04-24 00:39:39.888575] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:46.268 [2024-04-24 00:39:39.888849] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:25:46.268 [2024-04-24 00:39:39.888941] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:46.268 [2024-04-24 00:39:39.889093] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:46.268 [2024-04-24 00:39:39.889489] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:25:46.268 [2024-04-24 00:39:39.889590] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:25:46.268 [2024-04-24 00:39:39.889786] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:46.268 00:39:39 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:46.268 00:39:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:46.268 00:39:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:46.268 00:39:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:46.268 00:39:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:46.268 00:39:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:46.268 00:39:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:46.268 00:39:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:46.268 00:39:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:46.268 00:39:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:46.268 00:39:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.268 00:39:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.525 00:39:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:46.525 "name": "raid_bdev1", 00:25:46.525 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:25:46.525 "strip_size_kb": 0, 00:25:46.525 "state": "online", 00:25:46.525 "raid_level": "raid1", 00:25:46.525 "superblock": true, 00:25:46.525 "num_base_bdevs": 4, 00:25:46.525 "num_base_bdevs_discovered": 4, 00:25:46.525 "num_base_bdevs_operational": 4, 00:25:46.525 "base_bdevs_list": [ 00:25:46.525 { 00:25:46.525 "name": "BaseBdev1", 00:25:46.525 "uuid": "d12cf74c-d87e-544c-9d2e-e39d6bdb12d8", 00:25:46.525 "is_configured": true, 00:25:46.525 "data_offset": 2048, 00:25:46.525 "data_size": 63488 00:25:46.525 }, 00:25:46.525 { 00:25:46.525 "name": "BaseBdev2", 00:25:46.525 "uuid": "5f0f198c-3b52-5318-aaa4-1ead8fbc2035", 00:25:46.525 "is_configured": true, 00:25:46.525 "data_offset": 2048, 00:25:46.525 "data_size": 63488 00:25:46.525 }, 00:25:46.525 { 00:25:46.525 "name": "BaseBdev3", 00:25:46.525 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:25:46.525 "is_configured": true, 00:25:46.525 "data_offset": 2048, 00:25:46.525 "data_size": 63488 00:25:46.525 }, 00:25:46.525 
{ 00:25:46.525 "name": "BaseBdev4", 00:25:46.525 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:25:46.525 "is_configured": true, 00:25:46.525 "data_offset": 2048, 00:25:46.525 "data_size": 63488 00:25:46.525 } 00:25:46.525 ] 00:25:46.525 }' 00:25:46.525 00:39:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:46.525 00:39:40 -- common/autotest_common.sh@10 -- # set +x 00:25:47.091 00:39:40 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:47.091 00:39:40 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:47.347 [2024-04-24 00:39:41.030493] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:47.347 00:39:41 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:25:47.347 00:39:41 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:47.347 00:39:41 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.604 00:39:41 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:47.604 00:39:41 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:25:47.604 00:39:41 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:47.604 00:39:41 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:47.604 [2024-04-24 00:39:41.369373] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:47.604 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:47.604 Zero copy mechanism will not be used. 00:25:47.604 Running I/O for 60 seconds... 
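With the array assembled, the test releases the deferred workload and pulls a base bdev out from under it, which is what produces the BaseBdev1 removal and "slot: 0" messages that follow. Roughly, under the same socket and bdev names as this run:

    # Kick off the 60-second workload that bdevperf (-z) has been holding back.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    # While that I/O is in flight, drop BaseBdev1; raid1 must keep serving from the remaining members.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1

Backgrounding perform_tests with & is a simplification; the real script tracks its completion and exit status separately.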
00:25:47.860 [2024-04-24 00:39:41.413034] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:47.860 [2024-04-24 00:39:41.419033] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:25:47.860 00:39:41 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:47.860 00:39:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:47.860 00:39:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:47.860 00:39:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:47.860 00:39:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:47.860 00:39:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:47.860 00:39:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:47.860 00:39:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:47.860 00:39:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:47.860 00:39:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:47.861 00:39:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.861 00:39:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.117 00:39:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:48.117 "name": "raid_bdev1", 00:25:48.117 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:25:48.117 "strip_size_kb": 0, 00:25:48.117 "state": "online", 00:25:48.117 "raid_level": "raid1", 00:25:48.117 "superblock": true, 00:25:48.117 "num_base_bdevs": 4, 00:25:48.117 "num_base_bdevs_discovered": 3, 00:25:48.117 "num_base_bdevs_operational": 3, 00:25:48.117 "base_bdevs_list": [ 00:25:48.117 { 00:25:48.117 "name": null, 00:25:48.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.117 "is_configured": false, 00:25:48.117 "data_offset": 2048, 00:25:48.117 "data_size": 63488 00:25:48.117 }, 00:25:48.117 { 00:25:48.117 "name": "BaseBdev2", 00:25:48.117 "uuid": "5f0f198c-3b52-5318-aaa4-1ead8fbc2035", 00:25:48.117 "is_configured": true, 00:25:48.117 "data_offset": 2048, 00:25:48.117 "data_size": 63488 00:25:48.117 }, 00:25:48.117 { 00:25:48.117 "name": "BaseBdev3", 00:25:48.117 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:25:48.117 "is_configured": true, 00:25:48.117 "data_offset": 2048, 00:25:48.117 "data_size": 63488 00:25:48.117 }, 00:25:48.117 { 00:25:48.117 "name": "BaseBdev4", 00:25:48.117 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:25:48.117 "is_configured": true, 00:25:48.117 "data_offset": 2048, 00:25:48.117 "data_size": 63488 00:25:48.117 } 00:25:48.117 ] 00:25:48.117 }' 00:25:48.117 00:39:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:48.117 00:39:41 -- common/autotest_common.sh@10 -- # set +x 00:25:48.683 00:39:42 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:48.968 [2024-04-24 00:39:42.581638] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:48.968 [2024-04-24 00:39:42.581865] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:48.968 00:39:42 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:48.968 [2024-04-24 00:39:42.638287] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:48.968 [2024-04-24 00:39:42.640849] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:49.224 
[2024-04-24 00:39:42.782767] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:49.224 [2024-04-24 00:39:42.784318] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:49.481 [2024-04-24 00:39:43.039096] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:49.481 [2024-04-24 00:39:43.039993] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:49.738 [2024-04-24 00:39:43.382651] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:49.738 [2024-04-24 00:39:43.383405] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:49.738 [2024-04-24 00:39:43.507031] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:49.995 00:39:43 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:49.995 00:39:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:49.995 00:39:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:49.995 00:39:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:49.995 00:39:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:49.995 00:39:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.995 00:39:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.252 [2024-04-24 00:39:43.856915] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:50.252 00:39:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:50.252 "name": "raid_bdev1", 00:25:50.252 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:25:50.252 "strip_size_kb": 0, 00:25:50.252 "state": "online", 00:25:50.252 "raid_level": "raid1", 00:25:50.252 "superblock": true, 00:25:50.252 "num_base_bdevs": 4, 00:25:50.252 "num_base_bdevs_discovered": 4, 00:25:50.252 "num_base_bdevs_operational": 4, 00:25:50.252 "process": { 00:25:50.252 "type": "rebuild", 00:25:50.252 "target": "spare", 00:25:50.252 "progress": { 00:25:50.252 "blocks": 14336, 00:25:50.252 "percent": 22 00:25:50.252 } 00:25:50.252 }, 00:25:50.252 "base_bdevs_list": [ 00:25:50.252 { 00:25:50.252 "name": "spare", 00:25:50.252 "uuid": "dcc1feaf-dab5-5458-a3fc-f69d3ea2dc83", 00:25:50.252 "is_configured": true, 00:25:50.252 "data_offset": 2048, 00:25:50.252 "data_size": 63488 00:25:50.252 }, 00:25:50.252 { 00:25:50.252 "name": "BaseBdev2", 00:25:50.252 "uuid": "5f0f198c-3b52-5318-aaa4-1ead8fbc2035", 00:25:50.252 "is_configured": true, 00:25:50.252 "data_offset": 2048, 00:25:50.252 "data_size": 63488 00:25:50.252 }, 00:25:50.252 { 00:25:50.252 "name": "BaseBdev3", 00:25:50.252 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:25:50.252 "is_configured": true, 00:25:50.252 "data_offset": 2048, 00:25:50.252 "data_size": 63488 00:25:50.252 }, 00:25:50.252 { 00:25:50.252 "name": "BaseBdev4", 00:25:50.252 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:25:50.252 "is_configured": true, 00:25:50.252 "data_offset": 2048, 00:25:50.252 "data_size": 63488 00:25:50.252 } 00:25:50.252 ] 00:25:50.252 }' 00:25:50.252 00:39:43 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:50.252 00:39:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:50.252 00:39:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:50.252 00:39:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:50.252 00:39:43 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:50.509 [2024-04-24 00:39:44.238979] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:50.767 [2024-04-24 00:39:44.324702] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:50.767 [2024-04-24 00:39:44.326235] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:50.767 [2024-04-24 00:39:44.428008] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:50.767 [2024-04-24 00:39:44.439166] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:50.767 [2024-04-24 00:39:44.477958] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:25:50.767 00:39:44 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:50.767 00:39:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:50.767 00:39:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:50.767 00:39:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:50.767 00:39:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:50.767 00:39:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:50.767 00:39:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:50.767 00:39:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:50.767 00:39:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:50.767 00:39:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:50.767 00:39:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.767 00:39:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.024 00:39:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:51.024 "name": "raid_bdev1", 00:25:51.024 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:25:51.024 "strip_size_kb": 0, 00:25:51.024 "state": "online", 00:25:51.024 "raid_level": "raid1", 00:25:51.024 "superblock": true, 00:25:51.024 "num_base_bdevs": 4, 00:25:51.024 "num_base_bdevs_discovered": 3, 00:25:51.024 "num_base_bdevs_operational": 3, 00:25:51.024 "base_bdevs_list": [ 00:25:51.024 { 00:25:51.024 "name": null, 00:25:51.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.024 "is_configured": false, 00:25:51.024 "data_offset": 2048, 00:25:51.024 "data_size": 63488 00:25:51.024 }, 00:25:51.024 { 00:25:51.024 "name": "BaseBdev2", 00:25:51.024 "uuid": "5f0f198c-3b52-5318-aaa4-1ead8fbc2035", 00:25:51.024 "is_configured": true, 00:25:51.024 "data_offset": 2048, 00:25:51.024 "data_size": 63488 00:25:51.024 }, 00:25:51.024 { 00:25:51.024 "name": "BaseBdev3", 00:25:51.024 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:25:51.024 "is_configured": true, 00:25:51.024 "data_offset": 2048, 00:25:51.024 "data_size": 63488 00:25:51.024 }, 00:25:51.024 { 00:25:51.024 "name": "BaseBdev4", 00:25:51.024 "uuid": 
"638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:25:51.024 "is_configured": true, 00:25:51.024 "data_offset": 2048, 00:25:51.024 "data_size": 63488 00:25:51.024 } 00:25:51.024 ] 00:25:51.024 }' 00:25:51.024 00:39:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:51.024 00:39:44 -- common/autotest_common.sh@10 -- # set +x 00:25:51.613 00:39:45 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:51.613 00:39:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:51.613 00:39:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:51.613 00:39:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:51.613 00:39:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:51.614 00:39:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.614 00:39:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.878 00:39:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:51.878 "name": "raid_bdev1", 00:25:51.878 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:25:51.878 "strip_size_kb": 0, 00:25:51.878 "state": "online", 00:25:51.878 "raid_level": "raid1", 00:25:51.878 "superblock": true, 00:25:51.878 "num_base_bdevs": 4, 00:25:51.878 "num_base_bdevs_discovered": 3, 00:25:51.878 "num_base_bdevs_operational": 3, 00:25:51.878 "base_bdevs_list": [ 00:25:51.878 { 00:25:51.878 "name": null, 00:25:51.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.878 "is_configured": false, 00:25:51.878 "data_offset": 2048, 00:25:51.878 "data_size": 63488 00:25:51.878 }, 00:25:51.878 { 00:25:51.878 "name": "BaseBdev2", 00:25:51.878 "uuid": "5f0f198c-3b52-5318-aaa4-1ead8fbc2035", 00:25:51.878 "is_configured": true, 00:25:51.878 "data_offset": 2048, 00:25:51.878 "data_size": 63488 00:25:51.878 }, 00:25:51.878 { 00:25:51.878 "name": "BaseBdev3", 00:25:51.878 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:25:51.878 "is_configured": true, 00:25:51.878 "data_offset": 2048, 00:25:51.878 "data_size": 63488 00:25:51.878 }, 00:25:51.878 { 00:25:51.878 "name": "BaseBdev4", 00:25:51.878 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:25:51.878 "is_configured": true, 00:25:51.878 "data_offset": 2048, 00:25:51.878 "data_size": 63488 00:25:51.878 } 00:25:51.878 ] 00:25:51.878 }' 00:25:51.878 00:39:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:51.878 00:39:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:51.878 00:39:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:52.136 00:39:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:52.136 00:39:45 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:52.406 [2024-04-24 00:39:45.969385] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:52.406 [2024-04-24 00:39:45.969620] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:52.406 00:39:46 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:52.406 [2024-04-24 00:39:46.030571] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:25:52.406 [2024-04-24 00:39:46.033003] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:52.406 [2024-04-24 00:39:46.159041] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:25:52.406 [2024-04-24 00:39:46.160652] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:52.670 [2024-04-24 00:39:46.388005] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:52.670 [2024-04-24 00:39:46.388450] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:53.236 [2024-04-24 00:39:46.749251] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:53.236 [2024-04-24 00:39:46.991295] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:53.236 00:39:47 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:53.236 00:39:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:53.236 00:39:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:53.236 00:39:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:53.236 00:39:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:53.236 00:39:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.236 00:39:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.493 00:39:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:53.493 "name": "raid_bdev1", 00:25:53.493 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:25:53.493 "strip_size_kb": 0, 00:25:53.493 "state": "online", 00:25:53.493 "raid_level": "raid1", 00:25:53.493 "superblock": true, 00:25:53.493 "num_base_bdevs": 4, 00:25:53.493 "num_base_bdevs_discovered": 4, 00:25:53.493 "num_base_bdevs_operational": 4, 00:25:53.493 "process": { 00:25:53.493 "type": "rebuild", 00:25:53.493 "target": "spare", 00:25:53.493 "progress": { 00:25:53.493 "blocks": 12288, 00:25:53.493 "percent": 19 00:25:53.493 } 00:25:53.493 }, 00:25:53.493 "base_bdevs_list": [ 00:25:53.493 { 00:25:53.493 "name": "spare", 00:25:53.493 "uuid": "dcc1feaf-dab5-5458-a3fc-f69d3ea2dc83", 00:25:53.493 "is_configured": true, 00:25:53.493 "data_offset": 2048, 00:25:53.493 "data_size": 63488 00:25:53.493 }, 00:25:53.493 { 00:25:53.493 "name": "BaseBdev2", 00:25:53.493 "uuid": "5f0f198c-3b52-5318-aaa4-1ead8fbc2035", 00:25:53.493 "is_configured": true, 00:25:53.493 "data_offset": 2048, 00:25:53.493 "data_size": 63488 00:25:53.493 }, 00:25:53.493 { 00:25:53.493 "name": "BaseBdev3", 00:25:53.493 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:25:53.493 "is_configured": true, 00:25:53.493 "data_offset": 2048, 00:25:53.493 "data_size": 63488 00:25:53.493 }, 00:25:53.493 { 00:25:53.493 "name": "BaseBdev4", 00:25:53.493 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:25:53.493 "is_configured": true, 00:25:53.493 "data_offset": 2048, 00:25:53.493 "data_size": 63488 00:25:53.493 } 00:25:53.493 ] 00:25:53.493 }' 00:25:53.493 00:39:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:53.751 00:39:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:53.751 00:39:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:53.751 [2024-04-24 00:39:47.353775] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:53.751 00:39:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == 
\s\p\a\r\e ]] 00:25:53.751 00:39:47 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:53.751 00:39:47 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:53.751 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:53.751 00:39:47 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:53.751 00:39:47 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:25:53.751 00:39:47 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:25:53.751 00:39:47 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:54.009 [2024-04-24 00:39:47.581135] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:54.009 [2024-04-24 00:39:47.604064] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:54.267 [2024-04-24 00:39:47.898873] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ee0 00:25:54.267 [2024-04-24 00:39:47.899139] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:25:54.267 00:39:48 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:25:54.267 00:39:48 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:25:54.267 00:39:48 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:54.267 00:39:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:54.267 00:39:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:54.267 00:39:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:54.267 00:39:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:54.267 00:39:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.267 00:39:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.267 [2024-04-24 00:39:48.053594] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:54.525 00:39:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:54.525 "name": "raid_bdev1", 00:25:54.525 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:25:54.525 "strip_size_kb": 0, 00:25:54.525 "state": "online", 00:25:54.525 "raid_level": "raid1", 00:25:54.525 "superblock": true, 00:25:54.525 "num_base_bdevs": 4, 00:25:54.525 "num_base_bdevs_discovered": 3, 00:25:54.525 "num_base_bdevs_operational": 3, 00:25:54.525 "process": { 00:25:54.525 "type": "rebuild", 00:25:54.525 "target": "spare", 00:25:54.525 "progress": { 00:25:54.525 "blocks": 20480, 00:25:54.525 "percent": 32 00:25:54.525 } 00:25:54.525 }, 00:25:54.525 "base_bdevs_list": [ 00:25:54.525 { 00:25:54.525 "name": "spare", 00:25:54.525 "uuid": "dcc1feaf-dab5-5458-a3fc-f69d3ea2dc83", 00:25:54.525 "is_configured": true, 00:25:54.525 "data_offset": 2048, 00:25:54.525 "data_size": 63488 00:25:54.525 }, 00:25:54.525 { 00:25:54.525 "name": null, 00:25:54.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.525 "is_configured": false, 00:25:54.525 "data_offset": 2048, 00:25:54.525 "data_size": 63488 00:25:54.525 }, 00:25:54.525 { 00:25:54.525 "name": "BaseBdev3", 00:25:54.525 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:25:54.525 "is_configured": true, 00:25:54.525 "data_offset": 2048, 00:25:54.525 "data_size": 63488 00:25:54.525 }, 00:25:54.525 { 00:25:54.525 "name": "BaseBdev4", 
00:25:54.525 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:25:54.525 "is_configured": true, 00:25:54.525 "data_offset": 2048, 00:25:54.525 "data_size": 63488 00:25:54.525 } 00:25:54.525 ] 00:25:54.525 }' 00:25:54.525 00:39:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:54.525 00:39:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:54.525 00:39:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:54.525 [2024-04-24 00:39:48.284950] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@657 -- # local timeout=611 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:54.783 "name": "raid_bdev1", 00:25:54.783 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:25:54.783 "strip_size_kb": 0, 00:25:54.783 "state": "online", 00:25:54.783 "raid_level": "raid1", 00:25:54.783 "superblock": true, 00:25:54.783 "num_base_bdevs": 4, 00:25:54.783 "num_base_bdevs_discovered": 3, 00:25:54.783 "num_base_bdevs_operational": 3, 00:25:54.783 "process": { 00:25:54.783 "type": "rebuild", 00:25:54.783 "target": "spare", 00:25:54.783 "progress": { 00:25:54.783 "blocks": 24576, 00:25:54.783 "percent": 38 00:25:54.783 } 00:25:54.783 }, 00:25:54.783 "base_bdevs_list": [ 00:25:54.783 { 00:25:54.783 "name": "spare", 00:25:54.783 "uuid": "dcc1feaf-dab5-5458-a3fc-f69d3ea2dc83", 00:25:54.783 "is_configured": true, 00:25:54.783 "data_offset": 2048, 00:25:54.783 "data_size": 63488 00:25:54.783 }, 00:25:54.783 { 00:25:54.783 "name": null, 00:25:54.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.783 "is_configured": false, 00:25:54.783 "data_offset": 2048, 00:25:54.783 "data_size": 63488 00:25:54.783 }, 00:25:54.783 { 00:25:54.783 "name": "BaseBdev3", 00:25:54.783 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:25:54.783 "is_configured": true, 00:25:54.783 "data_offset": 2048, 00:25:54.783 "data_size": 63488 00:25:54.783 }, 00:25:54.783 { 00:25:54.783 "name": "BaseBdev4", 00:25:54.783 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:25:54.783 "is_configured": true, 00:25:54.783 "data_offset": 2048, 00:25:54.783 "data_size": 63488 00:25:54.783 } 00:25:54.783 ] 00:25:54.783 }' 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:54.783 [2024-04-24 00:39:48.525013] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:54.783 00:39:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:55.041 00:39:48 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:55.041 00:39:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:55.298 [2024-04-24 00:39:48.882182] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:55.298 [2024-04-24 00:39:48.883389] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:55.555 [2024-04-24 00:39:49.101874] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:55.812 [2024-04-24 00:39:49.545607] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:25:56.069 00:39:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:56.069 00:39:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:56.069 00:39:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:56.069 00:39:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:56.069 00:39:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:56.069 00:39:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:56.069 00:39:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.069 00:39:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.326 00:39:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:56.326 "name": "raid_bdev1", 00:25:56.326 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:25:56.326 "strip_size_kb": 0, 00:25:56.326 "state": "online", 00:25:56.326 "raid_level": "raid1", 00:25:56.326 "superblock": true, 00:25:56.326 "num_base_bdevs": 4, 00:25:56.326 "num_base_bdevs_discovered": 3, 00:25:56.326 "num_base_bdevs_operational": 3, 00:25:56.326 "process": { 00:25:56.326 "type": "rebuild", 00:25:56.326 "target": "spare", 00:25:56.326 "progress": { 00:25:56.326 "blocks": 43008, 00:25:56.326 "percent": 67 00:25:56.326 } 00:25:56.326 }, 00:25:56.326 "base_bdevs_list": [ 00:25:56.326 { 00:25:56.326 "name": "spare", 00:25:56.326 "uuid": "dcc1feaf-dab5-5458-a3fc-f69d3ea2dc83", 00:25:56.326 "is_configured": true, 00:25:56.326 "data_offset": 2048, 00:25:56.326 "data_size": 63488 00:25:56.326 }, 00:25:56.327 { 00:25:56.327 "name": null, 00:25:56.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.327 "is_configured": false, 00:25:56.327 "data_offset": 2048, 00:25:56.327 "data_size": 63488 00:25:56.327 }, 00:25:56.327 { 00:25:56.327 "name": "BaseBdev3", 00:25:56.327 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:25:56.327 "is_configured": true, 00:25:56.327 "data_offset": 2048, 00:25:56.327 "data_size": 63488 00:25:56.327 }, 00:25:56.327 { 00:25:56.327 "name": "BaseBdev4", 00:25:56.327 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:25:56.327 "is_configured": true, 00:25:56.327 "data_offset": 2048, 00:25:56.327 "data_size": 63488 00:25:56.327 } 00:25:56.327 ] 00:25:56.327 }' 00:25:56.327 00:39:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:56.327 00:39:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:56.327 00:39:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:56.327 00:39:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:56.327 00:39:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:56.583 [2024-04-24 00:39:50.232566] bdev_raid.c: 
853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:25:56.893 [2024-04-24 00:39:50.450649] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:25:57.165 [2024-04-24 00:39:50.786759] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:25:57.165 [2024-04-24 00:39:50.895680] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:25:57.423 00:39:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:57.423 00:39:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:57.423 00:39:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:57.423 00:39:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:57.423 00:39:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:57.423 00:39:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:57.423 00:39:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.423 00:39:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.423 [2024-04-24 00:39:51.116744] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:57.423 [2024-04-24 00:39:51.214278] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:57.681 [2024-04-24 00:39:51.223921] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:57.681 "name": "raid_bdev1", 00:25:57.681 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:25:57.681 "strip_size_kb": 0, 00:25:57.681 "state": "online", 00:25:57.681 "raid_level": "raid1", 00:25:57.681 "superblock": true, 00:25:57.681 "num_base_bdevs": 4, 00:25:57.681 "num_base_bdevs_discovered": 3, 00:25:57.681 "num_base_bdevs_operational": 3, 00:25:57.681 "base_bdevs_list": [ 00:25:57.681 { 00:25:57.681 "name": "spare", 00:25:57.681 "uuid": "dcc1feaf-dab5-5458-a3fc-f69d3ea2dc83", 00:25:57.681 "is_configured": true, 00:25:57.681 "data_offset": 2048, 00:25:57.681 "data_size": 63488 00:25:57.681 }, 00:25:57.681 { 00:25:57.681 "name": null, 00:25:57.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.681 "is_configured": false, 00:25:57.681 "data_offset": 2048, 00:25:57.681 "data_size": 63488 00:25:57.681 }, 00:25:57.681 { 00:25:57.681 "name": "BaseBdev3", 00:25:57.681 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:25:57.681 "is_configured": true, 00:25:57.681 "data_offset": 2048, 00:25:57.681 "data_size": 63488 00:25:57.681 }, 00:25:57.681 { 00:25:57.681 "name": "BaseBdev4", 00:25:57.681 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:25:57.681 "is_configured": true, 00:25:57.681 "data_offset": 2048, 00:25:57.681 "data_size": 63488 00:25:57.681 } 00:25:57.681 ] 00:25:57.681 }' 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@660 -- # break 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@666 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.681 00:39:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:57.939 "name": "raid_bdev1", 00:25:57.939 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:25:57.939 "strip_size_kb": 0, 00:25:57.939 "state": "online", 00:25:57.939 "raid_level": "raid1", 00:25:57.939 "superblock": true, 00:25:57.939 "num_base_bdevs": 4, 00:25:57.939 "num_base_bdevs_discovered": 3, 00:25:57.939 "num_base_bdevs_operational": 3, 00:25:57.939 "base_bdevs_list": [ 00:25:57.939 { 00:25:57.939 "name": "spare", 00:25:57.939 "uuid": "dcc1feaf-dab5-5458-a3fc-f69d3ea2dc83", 00:25:57.939 "is_configured": true, 00:25:57.939 "data_offset": 2048, 00:25:57.939 "data_size": 63488 00:25:57.939 }, 00:25:57.939 { 00:25:57.939 "name": null, 00:25:57.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.939 "is_configured": false, 00:25:57.939 "data_offset": 2048, 00:25:57.939 "data_size": 63488 00:25:57.939 }, 00:25:57.939 { 00:25:57.939 "name": "BaseBdev3", 00:25:57.939 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:25:57.939 "is_configured": true, 00:25:57.939 "data_offset": 2048, 00:25:57.939 "data_size": 63488 00:25:57.939 }, 00:25:57.939 { 00:25:57.939 "name": "BaseBdev4", 00:25:57.939 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:25:57.939 "is_configured": true, 00:25:57.939 "data_offset": 2048, 00:25:57.939 "data_size": 63488 00:25:57.939 } 00:25:57.939 ] 00:25:57.939 }' 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.939 00:39:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.197 00:39:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:58.197 "name": "raid_bdev1", 00:25:58.197 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:25:58.197 "strip_size_kb": 0, 00:25:58.197 
"state": "online", 00:25:58.197 "raid_level": "raid1", 00:25:58.197 "superblock": true, 00:25:58.197 "num_base_bdevs": 4, 00:25:58.197 "num_base_bdevs_discovered": 3, 00:25:58.197 "num_base_bdevs_operational": 3, 00:25:58.197 "base_bdevs_list": [ 00:25:58.197 { 00:25:58.197 "name": "spare", 00:25:58.197 "uuid": "dcc1feaf-dab5-5458-a3fc-f69d3ea2dc83", 00:25:58.197 "is_configured": true, 00:25:58.197 "data_offset": 2048, 00:25:58.197 "data_size": 63488 00:25:58.197 }, 00:25:58.197 { 00:25:58.197 "name": null, 00:25:58.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.197 "is_configured": false, 00:25:58.197 "data_offset": 2048, 00:25:58.197 "data_size": 63488 00:25:58.197 }, 00:25:58.197 { 00:25:58.197 "name": "BaseBdev3", 00:25:58.197 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:25:58.197 "is_configured": true, 00:25:58.197 "data_offset": 2048, 00:25:58.197 "data_size": 63488 00:25:58.197 }, 00:25:58.197 { 00:25:58.197 "name": "BaseBdev4", 00:25:58.197 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:25:58.197 "is_configured": true, 00:25:58.197 "data_offset": 2048, 00:25:58.197 "data_size": 63488 00:25:58.197 } 00:25:58.197 ] 00:25:58.197 }' 00:25:58.197 00:39:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:58.197 00:39:51 -- common/autotest_common.sh@10 -- # set +x 00:25:59.129 00:39:52 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:59.129 [2024-04-24 00:39:52.883435] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:59.129 [2024-04-24 00:39:52.883645] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:59.386 00:25:59.386 Latency(us) 00:25:59.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.386 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:59.386 raid_bdev1 : 11.60 103.02 309.07 0.00 0.00 13054.87 409.60 118838.61 00:25:59.386 =================================================================================================================== 00:25:59.386 Total : 103.02 309.07 0.00 0.00 13054.87 409.60 118838.61 00:25:59.386 0 00:25:59.386 [2024-04-24 00:39:53.001593] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.386 [2024-04-24 00:39:53.001761] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:59.386 [2024-04-24 00:39:53.001887] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:59.386 [2024-04-24 00:39:53.001969] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:25:59.386 00:39:53 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.386 00:39:53 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:59.643 00:39:53 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:59.643 00:39:53 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:25:59.643 00:39:53 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:59.643 00:39:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:59.643 00:39:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:59.643 00:39:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:59.643 00:39:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:59.643 
00:39:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:59.643 00:39:53 -- bdev/nbd_common.sh@12 -- # local i 00:25:59.643 00:39:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:59.643 00:39:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:59.644 00:39:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:59.901 /dev/nbd0 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:59.901 00:39:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:59.901 00:39:53 -- common/autotest_common.sh@855 -- # local i 00:25:59.901 00:39:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:59.901 00:39:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:59.901 00:39:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:59.901 00:39:53 -- common/autotest_common.sh@859 -- # break 00:25:59.901 00:39:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:59.901 00:39:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:59.901 00:39:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:59.901 1+0 records in 00:25:59.901 1+0 records out 00:25:59.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632949 s, 6.5 MB/s 00:25:59.901 00:39:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:59.901 00:39:53 -- common/autotest_common.sh@872 -- # size=4096 00:25:59.901 00:39:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:59.901 00:39:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:59.901 00:39:53 -- common/autotest_common.sh@875 -- # return 0 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:59.901 00:39:53 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:59.901 00:39:53 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:25:59.901 00:39:53 -- bdev/bdev_raid.sh@678 -- # continue 00:25:59.901 00:39:53 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:25:59.901 00:39:53 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:25:59.901 00:39:53 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@12 -- # local i 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:59.901 00:39:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:26:00.159 /dev/nbd1 00:26:00.159 00:39:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:00.159 00:39:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:00.159 00:39:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:26:00.159 00:39:53 -- common/autotest_common.sh@855 -- # local i 00:26:00.159 
00:39:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:26:00.159 00:39:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:26:00.159 00:39:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:26:00.159 00:39:53 -- common/autotest_common.sh@859 -- # break 00:26:00.159 00:39:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:00.159 00:39:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:00.159 00:39:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:00.159 1+0 records in 00:26:00.159 1+0 records out 00:26:00.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486523 s, 8.4 MB/s 00:26:00.159 00:39:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:00.159 00:39:53 -- common/autotest_common.sh@872 -- # size=4096 00:26:00.159 00:39:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:00.159 00:39:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:26:00.159 00:39:53 -- common/autotest_common.sh@875 -- # return 0 00:26:00.159 00:39:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:00.159 00:39:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:00.159 00:39:53 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:00.416 00:39:54 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:00.416 00:39:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:00.416 00:39:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:00.416 00:39:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:00.416 00:39:54 -- bdev/nbd_common.sh@51 -- # local i 00:26:00.416 00:39:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:00.416 00:39:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@41 -- # break 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@45 -- # return 0 00:26:00.674 00:39:54 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:26:00.674 00:39:54 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:26:00.674 00:39:54 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@12 -- # local i 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:00.674 00:39:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 
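The nbd traffic above and below is the data-integrity check for the rebuild: the rebuilt member spare is exported as /dev/nbd0, each surviving member is exported in turn as /dev/nbd1, and cmp skips the first 1048576 bytes (the 2048-block data_offset, i.e. the superblock region) before requiring the raid1 mirrors to be byte-identical. A minimal sketch of one such comparison, assuming the rpc.py path and socket used throughout this run and that the kernel nbd module is loaded:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

$rpc -s $sock nbd_start_disk spare /dev/nbd0       # rebuilt member
$rpc -s $sock nbd_start_disk BaseBdev4 /dev/nbd1   # surviving member to compare against
for n in nbd0 nbd1; do
    until grep -q -w $n /proc/partitions; do sleep 0.1; done   # crude stand-in for the waitfornbd helper
done
# Skip 2048 blocks x 512 B of superblock/metadata, then require identical payloads.
cmp -i 1048576 /dev/nbd0 /dev/nbd1 && echo "spare matches BaseBdev4"
$rpc -s $sock nbd_stop_disk /dev/nbd1
$rpc -s $sock nbd_stop_disk /dev/nbd0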
00:26:00.932 /dev/nbd1 00:26:00.932 00:39:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:00.932 00:39:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:00.932 00:39:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:26:00.932 00:39:54 -- common/autotest_common.sh@855 -- # local i 00:26:00.932 00:39:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:26:00.932 00:39:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:26:00.932 00:39:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:26:00.932 00:39:54 -- common/autotest_common.sh@859 -- # break 00:26:00.932 00:39:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:00.932 00:39:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:00.932 00:39:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:01.190 1+0 records in 00:26:01.190 1+0 records out 00:26:01.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651581 s, 6.3 MB/s 00:26:01.190 00:39:54 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:01.190 00:39:54 -- common/autotest_common.sh@872 -- # size=4096 00:26:01.190 00:39:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:01.190 00:39:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:26:01.190 00:39:54 -- common/autotest_common.sh@875 -- # return 0 00:26:01.190 00:39:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:01.190 00:39:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:01.190 00:39:54 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:01.190 00:39:54 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:26:01.190 00:39:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:01.190 00:39:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:26:01.190 00:39:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:01.190 00:39:54 -- bdev/nbd_common.sh@51 -- # local i 00:26:01.190 00:39:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:01.190 00:39:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@41 -- # break 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@45 -- # return 0 00:26:01.448 00:39:55 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@51 -- # local i 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:01.448 00:39:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:01.706 00:39:55 -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:26:01.706 00:39:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:01.706 00:39:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:01.706 00:39:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:01.706 00:39:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:01.706 00:39:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:01.706 00:39:55 -- bdev/nbd_common.sh@41 -- # break 00:26:01.706 00:39:55 -- bdev/nbd_common.sh@45 -- # return 0 00:26:01.706 00:39:55 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:26:01.706 00:39:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:01.706 00:39:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:26:01.706 00:39:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:01.963 00:39:55 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:02.220 [2024-04-24 00:39:55.873412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:02.221 [2024-04-24 00:39:55.873675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.221 [2024-04-24 00:39:55.873758] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:02.221 [2024-04-24 00:39:55.873884] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.221 [2024-04-24 00:39:55.877078] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.221 [2024-04-24 00:39:55.877281] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:02.221 [2024-04-24 00:39:55.877522] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:02.221 [2024-04-24 00:39:55.877679] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:02.221 BaseBdev1 00:26:02.221 00:39:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:02.221 00:39:55 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:26:02.221 00:39:55 -- bdev/bdev_raid.sh@696 -- # continue 00:26:02.221 00:39:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:02.221 00:39:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:26:02.221 00:39:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:26:02.478 00:39:56 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:02.735 [2024-04-24 00:39:56.373731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:02.735 [2024-04-24 00:39:56.374021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.735 [2024-04-24 00:39:56.374100] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:26:02.735 [2024-04-24 00:39:56.374204] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.735 [2024-04-24 00:39:56.374734] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.735 [2024-04-24 00:39:56.374908] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:02.735 [2024-04-24 
00:39:56.375116] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:26:02.735 [2024-04-24 00:39:56.375207] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:26:02.735 [2024-04-24 00:39:56.375283] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:02.735 [2024-04-24 00:39:56.375339] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:26:02.735 [2024-04-24 00:39:56.375430] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:02.735 BaseBdev3 00:26:02.735 00:39:56 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:02.735 00:39:56 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:26:02.735 00:39:56 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:26:02.993 00:39:56 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:03.251 [2024-04-24 00:39:56.801867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:03.251 [2024-04-24 00:39:56.802187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.251 [2024-04-24 00:39:56.802259] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:03.251 [2024-04-24 00:39:56.802358] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.251 [2024-04-24 00:39:56.803055] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.251 [2024-04-24 00:39:56.803231] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:03.251 [2024-04-24 00:39:56.803489] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:26:03.251 [2024-04-24 00:39:56.803609] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:03.251 BaseBdev4 00:26:03.251 00:39:56 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:03.510 00:39:57 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:03.769 [2024-04-24 00:39:57.330041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:03.769 [2024-04-24 00:39:57.330357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.769 [2024-04-24 00:39:57.330426] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:03.769 [2024-04-24 00:39:57.330540] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.769 [2024-04-24 00:39:57.331102] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.769 [2024-04-24 00:39:57.331259] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:03.769 [2024-04-24 00:39:57.331430] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:26:03.769 [2024-04-24 00:39:57.331480] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:03.769 spare 
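The passthru delete/create cycle above is what puts the array back together after the on-disk comparison: recreating BaseBdev1, BaseBdev3, BaseBdev4 and spare on top of their original malloc/delay bdevs lets the raid module's examine path find the superblock on each one and re-claim it, so raid_bdev1 re-assembles with three of four members (BaseBdev2 stays absent because it was removed mid-rebuild). A condensed sketch of that sequence, assuming the rpc.py path and socket used throughout this run and that the *_malloc and spare_delay bdevs created earlier still exist:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# BaseBdev2 is skipped: it was removed from the array earlier in the run.
for b in BaseBdev1 BaseBdev3 BaseBdev4; do
    $rpc -s $sock bdev_passthru_delete $b
    $rpc -s $sock bdev_passthru_create -b ${b}_malloc -p $b   # examine path re-reads the raid superblock
done
$rpc -s $sock bdev_passthru_delete spare
$rpc -s $sock bdev_passthru_create -b spare_delay -p spare

# The array should come back online with 3 of its 4 slots configured.
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'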
00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.769 [2024-04-24 00:39:57.431624] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:26:03.769 [2024-04-24 00:39:57.431819] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:03.769 [2024-04-24 00:39:57.432015] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000373d0 00:26:03.769 [2024-04-24 00:39:57.432567] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:26:03.769 [2024-04-24 00:39:57.432666] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:26:03.769 [2024-04-24 00:39:57.432901] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:03.769 "name": "raid_bdev1", 00:26:03.769 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:26:03.769 "strip_size_kb": 0, 00:26:03.769 "state": "online", 00:26:03.769 "raid_level": "raid1", 00:26:03.769 "superblock": true, 00:26:03.769 "num_base_bdevs": 4, 00:26:03.769 "num_base_bdevs_discovered": 3, 00:26:03.769 "num_base_bdevs_operational": 3, 00:26:03.769 "base_bdevs_list": [ 00:26:03.769 { 00:26:03.769 "name": "spare", 00:26:03.769 "uuid": "dcc1feaf-dab5-5458-a3fc-f69d3ea2dc83", 00:26:03.769 "is_configured": true, 00:26:03.769 "data_offset": 2048, 00:26:03.769 "data_size": 63488 00:26:03.769 }, 00:26:03.769 { 00:26:03.769 "name": null, 00:26:03.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.769 "is_configured": false, 00:26:03.769 "data_offset": 2048, 00:26:03.769 "data_size": 63488 00:26:03.769 }, 00:26:03.769 { 00:26:03.769 "name": "BaseBdev3", 00:26:03.769 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:26:03.769 "is_configured": true, 00:26:03.769 "data_offset": 2048, 00:26:03.769 "data_size": 63488 00:26:03.769 }, 00:26:03.769 { 00:26:03.769 "name": "BaseBdev4", 00:26:03.769 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:26:03.769 "is_configured": true, 00:26:03.769 "data_offset": 2048, 00:26:03.769 "data_size": 63488 00:26:03.769 } 00:26:03.769 ] 00:26:03.769 }' 00:26:03.769 00:39:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:03.769 00:39:57 -- common/autotest_common.sh@10 -- # set +x 00:26:04.335 00:39:58 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:04.335 00:39:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:04.335 00:39:58 -- 
bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:04.335 00:39:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:04.335 00:39:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:04.335 00:39:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.335 00:39:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.592 00:39:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:04.592 "name": "raid_bdev1", 00:26:04.592 "uuid": "8c857353-9fd9-46c0-b446-ee3b6d6f635c", 00:26:04.592 "strip_size_kb": 0, 00:26:04.592 "state": "online", 00:26:04.592 "raid_level": "raid1", 00:26:04.592 "superblock": true, 00:26:04.592 "num_base_bdevs": 4, 00:26:04.592 "num_base_bdevs_discovered": 3, 00:26:04.592 "num_base_bdevs_operational": 3, 00:26:04.592 "base_bdevs_list": [ 00:26:04.592 { 00:26:04.592 "name": "spare", 00:26:04.592 "uuid": "dcc1feaf-dab5-5458-a3fc-f69d3ea2dc83", 00:26:04.592 "is_configured": true, 00:26:04.592 "data_offset": 2048, 00:26:04.592 "data_size": 63488 00:26:04.592 }, 00:26:04.592 { 00:26:04.592 "name": null, 00:26:04.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.592 "is_configured": false, 00:26:04.592 "data_offset": 2048, 00:26:04.592 "data_size": 63488 00:26:04.592 }, 00:26:04.592 { 00:26:04.592 "name": "BaseBdev3", 00:26:04.592 "uuid": "a19ec6d0-c557-5d82-ad18-b74ba09dae19", 00:26:04.592 "is_configured": true, 00:26:04.592 "data_offset": 2048, 00:26:04.592 "data_size": 63488 00:26:04.592 }, 00:26:04.592 { 00:26:04.592 "name": "BaseBdev4", 00:26:04.592 "uuid": "638e4fe4-8f5f-5f7c-b374-2cfa453552cf", 00:26:04.592 "is_configured": true, 00:26:04.592 "data_offset": 2048, 00:26:04.592 "data_size": 63488 00:26:04.592 } 00:26:04.592 ] 00:26:04.592 }' 00:26:04.849 00:39:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:04.849 00:39:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:04.849 00:39:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:04.849 00:39:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:04.849 00:39:58 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:04.849 00:39:58 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.106 00:39:58 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:26:05.106 00:39:58 -- bdev/bdev_raid.sh@709 -- # killprocess 135307 00:26:05.106 00:39:58 -- common/autotest_common.sh@936 -- # '[' -z 135307 ']' 00:26:05.106 00:39:58 -- common/autotest_common.sh@940 -- # kill -0 135307 00:26:05.106 00:39:58 -- common/autotest_common.sh@941 -- # uname 00:26:05.106 00:39:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:05.106 00:39:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135307 00:26:05.106 00:39:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:05.106 00:39:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:05.106 00:39:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135307' 00:26:05.106 killing process with pid 135307 00:26:05.106 00:39:58 -- common/autotest_common.sh@955 -- # kill 135307 00:26:05.106 Received shutdown signal, test time was about 17.407653 seconds 00:26:05.106 00:26:05.106 Latency(us) 00:26:05.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.106 
=================================================================================================================== 00:26:05.106 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:05.106 00:39:58 -- common/autotest_common.sh@960 -- # wait 135307 00:26:05.106 [2024-04-24 00:39:58.779699] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:05.106 [2024-04-24 00:39:58.779802] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:05.106 [2024-04-24 00:39:58.779946] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:05.106 [2024-04-24 00:39:58.780067] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:26:05.504 [2024-04-24 00:39:59.213770] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:06.877 00:40:00 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:06.877 00:26:06.877 real 0m25.040s 00:26:06.878 user 0m39.208s 00:26:06.878 sys 0m3.603s 00:26:06.878 ************************************ 00:26:06.878 END TEST raid_rebuild_test_sb_io 00:26:06.878 ************************************ 00:26:06.878 00:40:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:06.878 00:40:00 -- common/autotest_common.sh@10 -- # set +x 00:26:06.878 00:40:00 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:26:06.878 00:40:00 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:26:06.878 00:40:00 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:26:06.878 00:40:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:26:06.878 00:40:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:06.878 00:40:00 -- common/autotest_common.sh@10 -- # set +x 00:26:07.137 ************************************ 00:26:07.137 START TEST raid5f_state_function_test 00:26:07.137 ************************************ 00:26:07.137 00:40:00 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 3 false 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:26:07.137 00:40:00 -- 
bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:26:07.137 00:40:00 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:26:07.138 00:40:00 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:26:07.138 00:40:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=135938 00:26:07.138 00:40:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 135938' 00:26:07.138 Process raid pid: 135938 00:26:07.138 00:40:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 135938 /var/tmp/spdk-raid.sock 00:26:07.138 00:40:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:07.138 00:40:00 -- common/autotest_common.sh@817 -- # '[' -z 135938 ']' 00:26:07.138 00:40:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:07.138 00:40:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:07.138 00:40:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:07.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:07.138 00:40:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:07.138 00:40:00 -- common/autotest_common.sh@10 -- # set +x 00:26:07.138 [2024-04-24 00:40:00.777460] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:26:07.138 [2024-04-24 00:40:00.777757] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.397 [2024-04-24 00:40:00.943486] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.655 [2024-04-24 00:40:01.213706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.912 [2024-04-24 00:40:01.449509] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:07.912 00:40:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:07.912 00:40:01 -- common/autotest_common.sh@850 -- # return 0 00:26:07.912 00:40:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:08.170 [2024-04-24 00:40:01.961903] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:08.170 [2024-04-24 00:40:01.962132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:08.170 [2024-04-24 00:40:01.962281] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:08.170 [2024-04-24 00:40:01.962401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:08.427 [2024-04-24 00:40:01.962477] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:08.427 [2024-04-24 00:40:01.962586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:08.428 00:40:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:08.428 00:40:01 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:26:08.428 00:40:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:08.428 00:40:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:08.428 00:40:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:08.428 00:40:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:08.428 00:40:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:08.428 00:40:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:08.428 00:40:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:08.428 00:40:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:08.428 00:40:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.428 00:40:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.685 00:40:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:08.685 "name": "Existed_Raid", 00:26:08.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.685 "strip_size_kb": 64, 00:26:08.685 "state": "configuring", 00:26:08.685 "raid_level": "raid5f", 00:26:08.685 "superblock": false, 00:26:08.685 "num_base_bdevs": 3, 00:26:08.685 "num_base_bdevs_discovered": 0, 00:26:08.685 "num_base_bdevs_operational": 3, 00:26:08.685 "base_bdevs_list": [ 00:26:08.685 { 00:26:08.685 "name": "BaseBdev1", 00:26:08.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.685 "is_configured": false, 00:26:08.685 "data_offset": 0, 00:26:08.685 "data_size": 0 00:26:08.685 }, 00:26:08.685 { 00:26:08.685 "name": "BaseBdev2", 00:26:08.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.685 "is_configured": false, 00:26:08.685 "data_offset": 0, 00:26:08.685 "data_size": 0 00:26:08.685 }, 00:26:08.685 { 00:26:08.685 "name": "BaseBdev3", 00:26:08.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.685 "is_configured": false, 00:26:08.685 "data_offset": 0, 00:26:08.685 "data_size": 0 00:26:08.685 } 00:26:08.685 ] 00:26:08.685 }' 00:26:08.685 00:40:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:08.685 00:40:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.265 00:40:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:09.265 [2024-04-24 00:40:03.049995] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:09.265 [2024-04-24 00:40:03.050222] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:26:09.524 00:40:03 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:09.524 [2024-04-24 00:40:03.270065] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:09.524 [2024-04-24 00:40:03.270358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:09.524 [2024-04-24 00:40:03.270505] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:09.524 [2024-04-24 00:40:03.270592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:09.524 [2024-04-24 00:40:03.270623] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:09.524 [2024-04-24 00:40:03.270731] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:09.524 00:40:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:10.089 [2024-04-24 00:40:03.577204] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:10.089 BaseBdev1 00:26:10.089 00:40:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:26:10.089 00:40:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:26:10.089 00:40:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:10.089 00:40:03 -- common/autotest_common.sh@887 -- # local i 00:26:10.089 00:40:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:10.089 00:40:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:10.089 00:40:03 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:10.089 00:40:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:10.347 [ 00:26:10.347 { 00:26:10.347 "name": "BaseBdev1", 00:26:10.347 "aliases": [ 00:26:10.347 "fa0b2f46-ff8e-42a7-94d0-197e203c183e" 00:26:10.347 ], 00:26:10.347 "product_name": "Malloc disk", 00:26:10.347 "block_size": 512, 00:26:10.347 "num_blocks": 65536, 00:26:10.347 "uuid": "fa0b2f46-ff8e-42a7-94d0-197e203c183e", 00:26:10.347 "assigned_rate_limits": { 00:26:10.347 "rw_ios_per_sec": 0, 00:26:10.347 "rw_mbytes_per_sec": 0, 00:26:10.347 "r_mbytes_per_sec": 0, 00:26:10.347 "w_mbytes_per_sec": 0 00:26:10.347 }, 00:26:10.347 "claimed": true, 00:26:10.347 "claim_type": "exclusive_write", 00:26:10.347 "zoned": false, 00:26:10.347 "supported_io_types": { 00:26:10.347 "read": true, 00:26:10.347 "write": true, 00:26:10.347 "unmap": true, 00:26:10.347 "write_zeroes": true, 00:26:10.347 "flush": true, 00:26:10.347 "reset": true, 00:26:10.347 "compare": false, 00:26:10.347 "compare_and_write": false, 00:26:10.347 "abort": true, 00:26:10.347 "nvme_admin": false, 00:26:10.347 "nvme_io": false 00:26:10.347 }, 00:26:10.347 "memory_domains": [ 00:26:10.347 { 00:26:10.347 "dma_device_id": "system", 00:26:10.347 "dma_device_type": 1 00:26:10.347 }, 00:26:10.347 { 00:26:10.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.347 "dma_device_type": 2 00:26:10.347 } 00:26:10.347 ], 00:26:10.347 "driver_specific": {} 00:26:10.347 } 00:26:10.347 ] 00:26:10.347 00:40:04 -- common/autotest_common.sh@893 -- # return 0 00:26:10.347 00:40:04 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:10.347 00:40:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:10.347 00:40:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:10.347 00:40:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:10.347 00:40:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:10.347 00:40:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:10.347 00:40:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:10.347 00:40:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:10.347 00:40:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:10.347 00:40:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:10.347 00:40:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:26:10.347 00:40:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.606 00:40:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:10.606 "name": "Existed_Raid", 00:26:10.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.606 "strip_size_kb": 64, 00:26:10.606 "state": "configuring", 00:26:10.606 "raid_level": "raid5f", 00:26:10.606 "superblock": false, 00:26:10.606 "num_base_bdevs": 3, 00:26:10.606 "num_base_bdevs_discovered": 1, 00:26:10.606 "num_base_bdevs_operational": 3, 00:26:10.606 "base_bdevs_list": [ 00:26:10.606 { 00:26:10.606 "name": "BaseBdev1", 00:26:10.606 "uuid": "fa0b2f46-ff8e-42a7-94d0-197e203c183e", 00:26:10.606 "is_configured": true, 00:26:10.606 "data_offset": 0, 00:26:10.606 "data_size": 65536 00:26:10.606 }, 00:26:10.606 { 00:26:10.606 "name": "BaseBdev2", 00:26:10.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.606 "is_configured": false, 00:26:10.606 "data_offset": 0, 00:26:10.606 "data_size": 0 00:26:10.606 }, 00:26:10.606 { 00:26:10.606 "name": "BaseBdev3", 00:26:10.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.606 "is_configured": false, 00:26:10.606 "data_offset": 0, 00:26:10.606 "data_size": 0 00:26:10.606 } 00:26:10.606 ] 00:26:10.606 }' 00:26:10.606 00:40:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:10.606 00:40:04 -- common/autotest_common.sh@10 -- # set +x 00:26:11.174 00:40:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:11.433 [2024-04-24 00:40:05.033553] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:11.433 [2024-04-24 00:40:05.033742] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:26:11.433 00:40:05 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:26:11.433 00:40:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:11.691 [2024-04-24 00:40:05.253674] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:11.691 [2024-04-24 00:40:05.255933] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:11.691 [2024-04-24 00:40:05.256096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:11.691 [2024-04-24 00:40:05.256197] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:11.691 [2024-04-24 00:40:05.256254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:11.691 00:40:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:26:11.691 00:40:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:11.691 00:40:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:11.692 00:40:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:11.692 00:40:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:11.692 00:40:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:11.692 00:40:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:11.692 00:40:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:11.692 00:40:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:11.692 00:40:05 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:26:11.692 00:40:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:11.692 00:40:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:11.692 00:40:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.692 00:40:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.950 00:40:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:11.950 "name": "Existed_Raid", 00:26:11.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.950 "strip_size_kb": 64, 00:26:11.950 "state": "configuring", 00:26:11.950 "raid_level": "raid5f", 00:26:11.950 "superblock": false, 00:26:11.950 "num_base_bdevs": 3, 00:26:11.950 "num_base_bdevs_discovered": 1, 00:26:11.950 "num_base_bdevs_operational": 3, 00:26:11.950 "base_bdevs_list": [ 00:26:11.950 { 00:26:11.950 "name": "BaseBdev1", 00:26:11.950 "uuid": "fa0b2f46-ff8e-42a7-94d0-197e203c183e", 00:26:11.950 "is_configured": true, 00:26:11.950 "data_offset": 0, 00:26:11.950 "data_size": 65536 00:26:11.950 }, 00:26:11.950 { 00:26:11.950 "name": "BaseBdev2", 00:26:11.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.950 "is_configured": false, 00:26:11.950 "data_offset": 0, 00:26:11.950 "data_size": 0 00:26:11.950 }, 00:26:11.950 { 00:26:11.950 "name": "BaseBdev3", 00:26:11.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.950 "is_configured": false, 00:26:11.950 "data_offset": 0, 00:26:11.950 "data_size": 0 00:26:11.950 } 00:26:11.950 ] 00:26:11.950 }' 00:26:11.950 00:40:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:11.950 00:40:05 -- common/autotest_common.sh@10 -- # set +x 00:26:12.518 00:40:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:12.776 [2024-04-24 00:40:06.371152] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:12.776 BaseBdev2 00:26:12.776 00:40:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:26:12.776 00:40:06 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:26:12.776 00:40:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:12.776 00:40:06 -- common/autotest_common.sh@887 -- # local i 00:26:12.776 00:40:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:12.776 00:40:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:12.776 00:40:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:13.034 00:40:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:13.292 [ 00:26:13.292 { 00:26:13.292 "name": "BaseBdev2", 00:26:13.292 "aliases": [ 00:26:13.292 "966817c7-c9e9-49fb-b29f-bd6a8c0a23bf" 00:26:13.292 ], 00:26:13.292 "product_name": "Malloc disk", 00:26:13.292 "block_size": 512, 00:26:13.292 "num_blocks": 65536, 00:26:13.292 "uuid": "966817c7-c9e9-49fb-b29f-bd6a8c0a23bf", 00:26:13.292 "assigned_rate_limits": { 00:26:13.292 "rw_ios_per_sec": 0, 00:26:13.292 "rw_mbytes_per_sec": 0, 00:26:13.292 "r_mbytes_per_sec": 0, 00:26:13.292 "w_mbytes_per_sec": 0 00:26:13.292 }, 00:26:13.292 "claimed": true, 00:26:13.292 "claim_type": "exclusive_write", 00:26:13.292 "zoned": false, 00:26:13.292 "supported_io_types": { 00:26:13.292 "read": true, 00:26:13.292 "write": true, 00:26:13.292 "unmap": true, 
00:26:13.292 "write_zeroes": true, 00:26:13.292 "flush": true, 00:26:13.292 "reset": true, 00:26:13.292 "compare": false, 00:26:13.292 "compare_and_write": false, 00:26:13.292 "abort": true, 00:26:13.292 "nvme_admin": false, 00:26:13.292 "nvme_io": false 00:26:13.292 }, 00:26:13.292 "memory_domains": [ 00:26:13.292 { 00:26:13.292 "dma_device_id": "system", 00:26:13.292 "dma_device_type": 1 00:26:13.292 }, 00:26:13.292 { 00:26:13.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.292 "dma_device_type": 2 00:26:13.292 } 00:26:13.292 ], 00:26:13.292 "driver_specific": {} 00:26:13.292 } 00:26:13.292 ] 00:26:13.292 00:40:06 -- common/autotest_common.sh@893 -- # return 0 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.292 00:40:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.551 00:40:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:13.551 "name": "Existed_Raid", 00:26:13.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.551 "strip_size_kb": 64, 00:26:13.551 "state": "configuring", 00:26:13.551 "raid_level": "raid5f", 00:26:13.551 "superblock": false, 00:26:13.551 "num_base_bdevs": 3, 00:26:13.551 "num_base_bdevs_discovered": 2, 00:26:13.551 "num_base_bdevs_operational": 3, 00:26:13.551 "base_bdevs_list": [ 00:26:13.551 { 00:26:13.551 "name": "BaseBdev1", 00:26:13.551 "uuid": "fa0b2f46-ff8e-42a7-94d0-197e203c183e", 00:26:13.551 "is_configured": true, 00:26:13.551 "data_offset": 0, 00:26:13.551 "data_size": 65536 00:26:13.551 }, 00:26:13.551 { 00:26:13.551 "name": "BaseBdev2", 00:26:13.551 "uuid": "966817c7-c9e9-49fb-b29f-bd6a8c0a23bf", 00:26:13.551 "is_configured": true, 00:26:13.551 "data_offset": 0, 00:26:13.551 "data_size": 65536 00:26:13.551 }, 00:26:13.551 { 00:26:13.551 "name": "BaseBdev3", 00:26:13.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.551 "is_configured": false, 00:26:13.551 "data_offset": 0, 00:26:13.551 "data_size": 0 00:26:13.551 } 00:26:13.551 ] 00:26:13.551 }' 00:26:13.551 00:40:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:13.551 00:40:07 -- common/autotest_common.sh@10 -- # set +x 00:26:14.118 00:40:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:14.377 [2024-04-24 00:40:08.015741] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:14.377 [2024-04-24 00:40:08.016008] bdev_raid.c:1701:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000011500 00:26:14.377 [2024-04-24 00:40:08.016053] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:14.377 [2024-04-24 00:40:08.016287] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:26:14.377 [2024-04-24 00:40:08.022774] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:26:14.377 [2024-04-24 00:40:08.022910] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:26:14.377 [2024-04-24 00:40:08.023316] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.377 BaseBdev3 00:26:14.377 00:40:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:26:14.377 00:40:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:26:14.377 00:40:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:14.377 00:40:08 -- common/autotest_common.sh@887 -- # local i 00:26:14.377 00:40:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:14.377 00:40:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:14.377 00:40:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:14.647 00:40:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:14.906 [ 00:26:14.906 { 00:26:14.906 "name": "BaseBdev3", 00:26:14.906 "aliases": [ 00:26:14.906 "293780b7-f8ed-4364-81dc-4e03dcf38f9e" 00:26:14.906 ], 00:26:14.906 "product_name": "Malloc disk", 00:26:14.906 "block_size": 512, 00:26:14.906 "num_blocks": 65536, 00:26:14.906 "uuid": "293780b7-f8ed-4364-81dc-4e03dcf38f9e", 00:26:14.906 "assigned_rate_limits": { 00:26:14.906 "rw_ios_per_sec": 0, 00:26:14.906 "rw_mbytes_per_sec": 0, 00:26:14.906 "r_mbytes_per_sec": 0, 00:26:14.906 "w_mbytes_per_sec": 0 00:26:14.906 }, 00:26:14.906 "claimed": true, 00:26:14.906 "claim_type": "exclusive_write", 00:26:14.906 "zoned": false, 00:26:14.906 "supported_io_types": { 00:26:14.906 "read": true, 00:26:14.906 "write": true, 00:26:14.906 "unmap": true, 00:26:14.906 "write_zeroes": true, 00:26:14.906 "flush": true, 00:26:14.906 "reset": true, 00:26:14.906 "compare": false, 00:26:14.906 "compare_and_write": false, 00:26:14.906 "abort": true, 00:26:14.906 "nvme_admin": false, 00:26:14.906 "nvme_io": false 00:26:14.906 }, 00:26:14.906 "memory_domains": [ 00:26:14.906 { 00:26:14.906 "dma_device_id": "system", 00:26:14.906 "dma_device_type": 1 00:26:14.906 }, 00:26:14.906 { 00:26:14.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.906 "dma_device_type": 2 00:26:14.906 } 00:26:14.906 ], 00:26:14.906 "driver_specific": {} 00:26:14.906 } 00:26:14.906 ] 00:26:14.906 00:40:08 -- common/autotest_common.sh@893 -- # return 0 00:26:14.906 00:40:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:14.906 00:40:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:14.906 00:40:08 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:14.906 00:40:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:14.906 00:40:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:14.906 00:40:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:14.906 00:40:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:14.906 00:40:08 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:26:14.906 00:40:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:14.906 00:40:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:14.906 00:40:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:14.907 00:40:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:14.907 00:40:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.907 00:40:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.163 00:40:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:15.163 "name": "Existed_Raid", 00:26:15.163 "uuid": "540b5582-059a-4450-aebc-dc0068122082", 00:26:15.163 "strip_size_kb": 64, 00:26:15.163 "state": "online", 00:26:15.163 "raid_level": "raid5f", 00:26:15.163 "superblock": false, 00:26:15.163 "num_base_bdevs": 3, 00:26:15.163 "num_base_bdevs_discovered": 3, 00:26:15.163 "num_base_bdevs_operational": 3, 00:26:15.163 "base_bdevs_list": [ 00:26:15.163 { 00:26:15.163 "name": "BaseBdev1", 00:26:15.163 "uuid": "fa0b2f46-ff8e-42a7-94d0-197e203c183e", 00:26:15.163 "is_configured": true, 00:26:15.163 "data_offset": 0, 00:26:15.163 "data_size": 65536 00:26:15.163 }, 00:26:15.163 { 00:26:15.163 "name": "BaseBdev2", 00:26:15.163 "uuid": "966817c7-c9e9-49fb-b29f-bd6a8c0a23bf", 00:26:15.163 "is_configured": true, 00:26:15.163 "data_offset": 0, 00:26:15.163 "data_size": 65536 00:26:15.163 }, 00:26:15.163 { 00:26:15.163 "name": "BaseBdev3", 00:26:15.163 "uuid": "293780b7-f8ed-4364-81dc-4e03dcf38f9e", 00:26:15.163 "is_configured": true, 00:26:15.163 "data_offset": 0, 00:26:15.163 "data_size": 65536 00:26:15.163 } 00:26:15.163 ] 00:26:15.163 }' 00:26:15.163 00:40:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:15.163 00:40:08 -- common/autotest_common.sh@10 -- # set +x 00:26:15.728 00:40:09 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:15.986 [2024-04-24 00:40:09.602888] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.986 00:40:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.244 00:40:10 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:16.244 "name": "Existed_Raid", 00:26:16.244 "uuid": "540b5582-059a-4450-aebc-dc0068122082", 00:26:16.244 "strip_size_kb": 64, 00:26:16.244 "state": "online", 00:26:16.244 "raid_level": "raid5f", 00:26:16.244 "superblock": false, 00:26:16.244 "num_base_bdevs": 3, 00:26:16.244 "num_base_bdevs_discovered": 2, 00:26:16.244 "num_base_bdevs_operational": 2, 00:26:16.244 "base_bdevs_list": [ 00:26:16.244 { 00:26:16.244 "name": null, 00:26:16.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.244 "is_configured": false, 00:26:16.244 "data_offset": 0, 00:26:16.244 "data_size": 65536 00:26:16.244 }, 00:26:16.244 { 00:26:16.244 "name": "BaseBdev2", 00:26:16.244 "uuid": "966817c7-c9e9-49fb-b29f-bd6a8c0a23bf", 00:26:16.244 "is_configured": true, 00:26:16.244 "data_offset": 0, 00:26:16.244 "data_size": 65536 00:26:16.244 }, 00:26:16.244 { 00:26:16.244 "name": "BaseBdev3", 00:26:16.244 "uuid": "293780b7-f8ed-4364-81dc-4e03dcf38f9e", 00:26:16.244 "is_configured": true, 00:26:16.244 "data_offset": 0, 00:26:16.244 "data_size": 65536 00:26:16.244 } 00:26:16.244 ] 00:26:16.244 }' 00:26:16.244 00:40:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:16.244 00:40:10 -- common/autotest_common.sh@10 -- # set +x 00:26:17.179 00:40:10 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:17.179 00:40:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:17.179 00:40:10 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.179 00:40:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:17.179 00:40:10 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:17.179 00:40:10 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:17.179 00:40:10 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:17.437 [2024-04-24 00:40:11.181358] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:17.437 [2024-04-24 00:40:11.181663] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:17.695 [2024-04-24 00:40:11.291611] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.695 00:40:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:17.695 00:40:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:17.695 00:40:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.695 00:40:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:17.953 00:40:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:17.953 00:40:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:17.953 00:40:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:18.211 [2024-04-24 00:40:11.843975] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:18.211 [2024-04-24 00:40:11.844247] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:26:18.211 00:40:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:18.211 00:40:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:18.211 00:40:11 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.211 00:40:11 -- 
bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:18.469 00:40:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:18.469 00:40:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:18.469 00:40:12 -- bdev/bdev_raid.sh@287 -- # killprocess 135938 00:26:18.469 00:40:12 -- common/autotest_common.sh@936 -- # '[' -z 135938 ']' 00:26:18.469 00:40:12 -- common/autotest_common.sh@940 -- # kill -0 135938 00:26:18.469 00:40:12 -- common/autotest_common.sh@941 -- # uname 00:26:18.727 00:40:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:18.727 00:40:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135938 00:26:18.727 killing process with pid 135938 00:26:18.727 00:40:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:18.727 00:40:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:18.727 00:40:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135938' 00:26:18.727 00:40:12 -- common/autotest_common.sh@955 -- # kill 135938 00:26:18.727 00:40:12 -- common/autotest_common.sh@960 -- # wait 135938 00:26:18.727 [2024-04-24 00:40:12.282854] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:18.727 [2024-04-24 00:40:12.282993] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:20.101 ************************************ 00:26:20.101 END TEST raid5f_state_function_test 00:26:20.101 ************************************ 00:26:20.101 00:40:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:26:20.101 00:26:20.101 real 0m12.868s 00:26:20.101 user 0m22.033s 00:26:20.101 sys 0m1.818s 00:26:20.101 00:40:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:20.101 00:40:13 -- common/autotest_common.sh@10 -- # set +x 00:26:20.101 00:40:13 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:26:20.101 00:40:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:26:20.101 00:40:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:20.101 00:40:13 -- common/autotest_common.sh@10 -- # set +x 00:26:20.101 ************************************ 00:26:20.101 START TEST raid5f_state_function_test_sb 00:26:20.101 ************************************ 00:26:20.101 00:40:13 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 3 true 00:26:20.101 00:40:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:26:20.101 00:40:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:26:20.101 00:40:13 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:26:20.101 00:40:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:26:20.101 00:40:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:20.102 00:40:13 -- 
bdev/bdev_raid.sh@206 -- # local base_bdevs 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=136337 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 136337' 00:26:20.102 Process raid pid: 136337 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:20.102 00:40:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 136337 /var/tmp/spdk-raid.sock 00:26:20.102 00:40:13 -- common/autotest_common.sh@817 -- # '[' -z 136337 ']' 00:26:20.102 00:40:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:20.102 00:40:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:20.102 00:40:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:20.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:20.102 00:40:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:20.102 00:40:13 -- common/autotest_common.sh@10 -- # set +x 00:26:20.102 [2024-04-24 00:40:13.775291] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:26:20.102 [2024-04-24 00:40:13.775704] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.360 [2024-04-24 00:40:13.955158] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.618 [2024-04-24 00:40:14.240138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.894 [2024-04-24 00:40:14.462174] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:21.190 00:40:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:21.190 00:40:14 -- common/autotest_common.sh@850 -- # return 0 00:26:21.190 00:40:14 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:21.190 [2024-04-24 00:40:14.956748] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:21.190 [2024-04-24 00:40:14.957002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:21.191 [2024-04-24 00:40:14.957127] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:21.191 [2024-04-24 00:40:14.957246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:21.191 [2024-04-24 00:40:14.957319] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:21.191 [2024-04-24 00:40:14.957395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:21.191 00:40:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:21.191 00:40:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:21.191 00:40:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:21.191 00:40:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:21.191 00:40:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:21.191 00:40:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:21.191 00:40:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:21.191 00:40:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:21.191 00:40:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:21.191 00:40:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:21.191 00:40:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.191 00:40:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:21.758 00:40:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:21.758 "name": "Existed_Raid", 00:26:21.758 "uuid": "de4e44b3-1b27-457b-bb07-61bda0befc01", 00:26:21.758 "strip_size_kb": 64, 00:26:21.758 "state": "configuring", 00:26:21.758 "raid_level": "raid5f", 00:26:21.758 "superblock": true, 00:26:21.758 "num_base_bdevs": 3, 00:26:21.758 "num_base_bdevs_discovered": 0, 00:26:21.758 "num_base_bdevs_operational": 3, 00:26:21.758 "base_bdevs_list": [ 00:26:21.758 { 00:26:21.758 "name": "BaseBdev1", 00:26:21.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.758 "is_configured": false, 00:26:21.758 "data_offset": 0, 00:26:21.758 "data_size": 0 00:26:21.758 }, 00:26:21.758 { 00:26:21.758 "name": "BaseBdev2", 00:26:21.758 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:21.758 "is_configured": false, 00:26:21.758 "data_offset": 0, 00:26:21.758 "data_size": 0 00:26:21.758 }, 00:26:21.758 { 00:26:21.758 "name": "BaseBdev3", 00:26:21.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.758 "is_configured": false, 00:26:21.758 "data_offset": 0, 00:26:21.758 "data_size": 0 00:26:21.758 } 00:26:21.758 ] 00:26:21.758 }' 00:26:21.758 00:40:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:21.758 00:40:15 -- common/autotest_common.sh@10 -- # set +x 00:26:22.017 00:40:15 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:22.276 [2024-04-24 00:40:16.008830] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:22.276 [2024-04-24 00:40:16.009059] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:26:22.276 00:40:16 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:22.534 [2024-04-24 00:40:16.264904] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:22.534 [2024-04-24 00:40:16.265134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:22.534 [2024-04-24 00:40:16.265243] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:22.534 [2024-04-24 00:40:16.265299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:22.534 [2024-04-24 00:40:16.265329] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:22.534 [2024-04-24 00:40:16.265443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:22.534 00:40:16 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:23.177 [2024-04-24 00:40:16.688849] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:23.177 BaseBdev1 00:26:23.177 00:40:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:26:23.177 00:40:16 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:26:23.177 00:40:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:23.177 00:40:16 -- common/autotest_common.sh@887 -- # local i 00:26:23.177 00:40:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:23.177 00:40:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:23.177 00:40:16 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:23.177 00:40:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:23.437 [ 00:26:23.437 { 00:26:23.437 "name": "BaseBdev1", 00:26:23.437 "aliases": [ 00:26:23.437 "3276f008-d245-4c80-881f-7be3df6653dd" 00:26:23.437 ], 00:26:23.437 "product_name": "Malloc disk", 00:26:23.437 "block_size": 512, 00:26:23.437 "num_blocks": 65536, 00:26:23.437 "uuid": "3276f008-d245-4c80-881f-7be3df6653dd", 00:26:23.437 "assigned_rate_limits": { 00:26:23.437 "rw_ios_per_sec": 0, 00:26:23.438 "rw_mbytes_per_sec": 0, 00:26:23.438 "r_mbytes_per_sec": 0, 00:26:23.438 
"w_mbytes_per_sec": 0 00:26:23.438 }, 00:26:23.438 "claimed": true, 00:26:23.438 "claim_type": "exclusive_write", 00:26:23.438 "zoned": false, 00:26:23.438 "supported_io_types": { 00:26:23.438 "read": true, 00:26:23.438 "write": true, 00:26:23.438 "unmap": true, 00:26:23.438 "write_zeroes": true, 00:26:23.438 "flush": true, 00:26:23.438 "reset": true, 00:26:23.438 "compare": false, 00:26:23.438 "compare_and_write": false, 00:26:23.438 "abort": true, 00:26:23.438 "nvme_admin": false, 00:26:23.438 "nvme_io": false 00:26:23.438 }, 00:26:23.438 "memory_domains": [ 00:26:23.438 { 00:26:23.438 "dma_device_id": "system", 00:26:23.438 "dma_device_type": 1 00:26:23.438 }, 00:26:23.438 { 00:26:23.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.438 "dma_device_type": 2 00:26:23.438 } 00:26:23.438 ], 00:26:23.438 "driver_specific": {} 00:26:23.438 } 00:26:23.438 ] 00:26:23.438 00:40:17 -- common/autotest_common.sh@893 -- # return 0 00:26:23.438 00:40:17 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:23.438 00:40:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:23.438 00:40:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:23.438 00:40:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:23.438 00:40:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:23.438 00:40:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:23.438 00:40:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:23.438 00:40:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:23.438 00:40:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:23.438 00:40:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:23.438 00:40:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:23.438 00:40:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.696 00:40:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:23.696 "name": "Existed_Raid", 00:26:23.696 "uuid": "f6a7150e-2198-4b05-8193-91fb32839f90", 00:26:23.696 "strip_size_kb": 64, 00:26:23.696 "state": "configuring", 00:26:23.696 "raid_level": "raid5f", 00:26:23.696 "superblock": true, 00:26:23.696 "num_base_bdevs": 3, 00:26:23.696 "num_base_bdevs_discovered": 1, 00:26:23.696 "num_base_bdevs_operational": 3, 00:26:23.696 "base_bdevs_list": [ 00:26:23.696 { 00:26:23.696 "name": "BaseBdev1", 00:26:23.696 "uuid": "3276f008-d245-4c80-881f-7be3df6653dd", 00:26:23.696 "is_configured": true, 00:26:23.696 "data_offset": 2048, 00:26:23.696 "data_size": 63488 00:26:23.696 }, 00:26:23.696 { 00:26:23.696 "name": "BaseBdev2", 00:26:23.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.696 "is_configured": false, 00:26:23.696 "data_offset": 0, 00:26:23.696 "data_size": 0 00:26:23.696 }, 00:26:23.696 { 00:26:23.696 "name": "BaseBdev3", 00:26:23.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.696 "is_configured": false, 00:26:23.696 "data_offset": 0, 00:26:23.696 "data_size": 0 00:26:23.696 } 00:26:23.696 ] 00:26:23.696 }' 00:26:23.696 00:40:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:23.696 00:40:17 -- common/autotest_common.sh@10 -- # set +x 00:26:24.262 00:40:17 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:24.521 [2024-04-24 00:40:18.133162] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:26:24.521 [2024-04-24 00:40:18.133379] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:26:24.521 00:40:18 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:26:24.521 00:40:18 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:24.781 00:40:18 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:25.040 BaseBdev1 00:26:25.299 00:40:18 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:26:25.299 00:40:18 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:26:25.299 00:40:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:25.299 00:40:18 -- common/autotest_common.sh@887 -- # local i 00:26:25.299 00:40:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:25.299 00:40:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:25.299 00:40:18 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:25.299 00:40:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:25.558 [ 00:26:25.558 { 00:26:25.558 "name": "BaseBdev1", 00:26:25.558 "aliases": [ 00:26:25.558 "9d486d10-2c9a-43ec-af0a-1a8abb6ce8a8" 00:26:25.558 ], 00:26:25.558 "product_name": "Malloc disk", 00:26:25.558 "block_size": 512, 00:26:25.558 "num_blocks": 65536, 00:26:25.558 "uuid": "9d486d10-2c9a-43ec-af0a-1a8abb6ce8a8", 00:26:25.558 "assigned_rate_limits": { 00:26:25.558 "rw_ios_per_sec": 0, 00:26:25.558 "rw_mbytes_per_sec": 0, 00:26:25.558 "r_mbytes_per_sec": 0, 00:26:25.558 "w_mbytes_per_sec": 0 00:26:25.558 }, 00:26:25.558 "claimed": false, 00:26:25.558 "zoned": false, 00:26:25.558 "supported_io_types": { 00:26:25.558 "read": true, 00:26:25.558 "write": true, 00:26:25.558 "unmap": true, 00:26:25.558 "write_zeroes": true, 00:26:25.558 "flush": true, 00:26:25.558 "reset": true, 00:26:25.558 "compare": false, 00:26:25.558 "compare_and_write": false, 00:26:25.558 "abort": true, 00:26:25.558 "nvme_admin": false, 00:26:25.558 "nvme_io": false 00:26:25.558 }, 00:26:25.558 "memory_domains": [ 00:26:25.558 { 00:26:25.558 "dma_device_id": "system", 00:26:25.558 "dma_device_type": 1 00:26:25.558 }, 00:26:25.558 { 00:26:25.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.558 "dma_device_type": 2 00:26:25.558 } 00:26:25.558 ], 00:26:25.558 "driver_specific": {} 00:26:25.558 } 00:26:25.558 ] 00:26:25.558 00:40:19 -- common/autotest_common.sh@893 -- # return 0 00:26:25.558 00:40:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:26:25.817 [2024-04-24 00:40:19.411962] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:25.817 [2024-04-24 00:40:19.414033] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:25.817 [2024-04-24 00:40:19.414194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:25.817 [2024-04-24 00:40:19.414277] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:25.817 [2024-04-24 00:40:19.414333] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.817 00:40:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.076 00:40:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:26.076 "name": "Existed_Raid", 00:26:26.076 "uuid": "d37d33f2-124e-4653-80c0-f9b5c04f27bf", 00:26:26.076 "strip_size_kb": 64, 00:26:26.076 "state": "configuring", 00:26:26.076 "raid_level": "raid5f", 00:26:26.076 "superblock": true, 00:26:26.076 "num_base_bdevs": 3, 00:26:26.076 "num_base_bdevs_discovered": 1, 00:26:26.076 "num_base_bdevs_operational": 3, 00:26:26.076 "base_bdevs_list": [ 00:26:26.076 { 00:26:26.076 "name": "BaseBdev1", 00:26:26.076 "uuid": "9d486d10-2c9a-43ec-af0a-1a8abb6ce8a8", 00:26:26.076 "is_configured": true, 00:26:26.076 "data_offset": 2048, 00:26:26.076 "data_size": 63488 00:26:26.076 }, 00:26:26.076 { 00:26:26.076 "name": "BaseBdev2", 00:26:26.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.076 "is_configured": false, 00:26:26.076 "data_offset": 0, 00:26:26.076 "data_size": 0 00:26:26.076 }, 00:26:26.076 { 00:26:26.076 "name": "BaseBdev3", 00:26:26.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.076 "is_configured": false, 00:26:26.076 "data_offset": 0, 00:26:26.076 "data_size": 0 00:26:26.076 } 00:26:26.076 ] 00:26:26.076 }' 00:26:26.076 00:40:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:26.076 00:40:19 -- common/autotest_common.sh@10 -- # set +x 00:26:26.641 00:40:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:26.900 [2024-04-24 00:40:20.475605] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:26.900 BaseBdev2 00:26:26.900 00:40:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:26:26.900 00:40:20 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:26:26.900 00:40:20 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:26.900 00:40:20 -- common/autotest_common.sh@887 -- # local i 00:26:26.900 00:40:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:26.900 00:40:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:26.900 00:40:20 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:27.225 00:40:20 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:27.225 [ 00:26:27.225 { 00:26:27.225 "name": "BaseBdev2", 00:26:27.225 "aliases": [ 00:26:27.225 "92c81abf-5deb-478b-9e80-4051e495858d" 00:26:27.225 ], 00:26:27.225 "product_name": "Malloc disk", 00:26:27.225 "block_size": 512, 00:26:27.225 "num_blocks": 65536, 00:26:27.225 "uuid": "92c81abf-5deb-478b-9e80-4051e495858d", 00:26:27.225 "assigned_rate_limits": { 00:26:27.225 "rw_ios_per_sec": 0, 00:26:27.225 "rw_mbytes_per_sec": 0, 00:26:27.225 "r_mbytes_per_sec": 0, 00:26:27.225 "w_mbytes_per_sec": 0 00:26:27.225 }, 00:26:27.225 "claimed": true, 00:26:27.225 "claim_type": "exclusive_write", 00:26:27.225 "zoned": false, 00:26:27.225 "supported_io_types": { 00:26:27.225 "read": true, 00:26:27.225 "write": true, 00:26:27.225 "unmap": true, 00:26:27.225 "write_zeroes": true, 00:26:27.225 "flush": true, 00:26:27.225 "reset": true, 00:26:27.225 "compare": false, 00:26:27.225 "compare_and_write": false, 00:26:27.225 "abort": true, 00:26:27.225 "nvme_admin": false, 00:26:27.225 "nvme_io": false 00:26:27.225 }, 00:26:27.225 "memory_domains": [ 00:26:27.225 { 00:26:27.225 "dma_device_id": "system", 00:26:27.225 "dma_device_type": 1 00:26:27.225 }, 00:26:27.225 { 00:26:27.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.225 "dma_device_type": 2 00:26:27.225 } 00:26:27.225 ], 00:26:27.225 "driver_specific": {} 00:26:27.225 } 00:26:27.225 ] 00:26:27.225 00:40:20 -- common/autotest_common.sh@893 -- # return 0 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.225 00:40:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:27.484 00:40:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:27.484 "name": "Existed_Raid", 00:26:27.484 "uuid": "d37d33f2-124e-4653-80c0-f9b5c04f27bf", 00:26:27.484 "strip_size_kb": 64, 00:26:27.484 "state": "configuring", 00:26:27.484 "raid_level": "raid5f", 00:26:27.484 "superblock": true, 00:26:27.484 "num_base_bdevs": 3, 00:26:27.484 "num_base_bdevs_discovered": 2, 00:26:27.484 "num_base_bdevs_operational": 3, 00:26:27.484 "base_bdevs_list": [ 00:26:27.484 { 00:26:27.484 "name": "BaseBdev1", 00:26:27.484 "uuid": "9d486d10-2c9a-43ec-af0a-1a8abb6ce8a8", 00:26:27.484 "is_configured": true, 00:26:27.484 "data_offset": 2048, 00:26:27.484 "data_size": 63488 00:26:27.484 }, 00:26:27.484 { 00:26:27.484 "name": "BaseBdev2", 00:26:27.484 "uuid": "92c81abf-5deb-478b-9e80-4051e495858d", 00:26:27.484 
"is_configured": true, 00:26:27.484 "data_offset": 2048, 00:26:27.484 "data_size": 63488 00:26:27.484 }, 00:26:27.484 { 00:26:27.484 "name": "BaseBdev3", 00:26:27.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.484 "is_configured": false, 00:26:27.484 "data_offset": 0, 00:26:27.484 "data_size": 0 00:26:27.484 } 00:26:27.484 ] 00:26:27.484 }' 00:26:27.484 00:40:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:27.484 00:40:21 -- common/autotest_common.sh@10 -- # set +x 00:26:28.050 00:40:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:28.308 [2024-04-24 00:40:21.883476] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:28.308 [2024-04-24 00:40:21.883966] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:26:28.308 [2024-04-24 00:40:21.884100] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:28.308 [2024-04-24 00:40:21.884273] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:26:28.308 BaseBdev3 00:26:28.308 [2024-04-24 00:40:21.891227] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:26:28.308 [2024-04-24 00:40:21.891380] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:26:28.308 [2024-04-24 00:40:21.891681] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:28.308 00:40:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:26:28.308 00:40:21 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:26:28.308 00:40:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:28.308 00:40:21 -- common/autotest_common.sh@887 -- # local i 00:26:28.308 00:40:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:28.308 00:40:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:28.308 00:40:21 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:28.308 00:40:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:28.567 [ 00:26:28.567 { 00:26:28.567 "name": "BaseBdev3", 00:26:28.567 "aliases": [ 00:26:28.567 "8b3549fd-53e5-4306-bfb6-04a69366795d" 00:26:28.567 ], 00:26:28.567 "product_name": "Malloc disk", 00:26:28.567 "block_size": 512, 00:26:28.567 "num_blocks": 65536, 00:26:28.567 "uuid": "8b3549fd-53e5-4306-bfb6-04a69366795d", 00:26:28.567 "assigned_rate_limits": { 00:26:28.567 "rw_ios_per_sec": 0, 00:26:28.567 "rw_mbytes_per_sec": 0, 00:26:28.567 "r_mbytes_per_sec": 0, 00:26:28.567 "w_mbytes_per_sec": 0 00:26:28.567 }, 00:26:28.567 "claimed": true, 00:26:28.567 "claim_type": "exclusive_write", 00:26:28.567 "zoned": false, 00:26:28.567 "supported_io_types": { 00:26:28.567 "read": true, 00:26:28.567 "write": true, 00:26:28.567 "unmap": true, 00:26:28.567 "write_zeroes": true, 00:26:28.567 "flush": true, 00:26:28.567 "reset": true, 00:26:28.567 "compare": false, 00:26:28.567 "compare_and_write": false, 00:26:28.567 "abort": true, 00:26:28.567 "nvme_admin": false, 00:26:28.567 "nvme_io": false 00:26:28.567 }, 00:26:28.567 "memory_domains": [ 00:26:28.567 { 00:26:28.567 "dma_device_id": "system", 00:26:28.567 "dma_device_type": 1 00:26:28.567 }, 00:26:28.567 { 00:26:28.568 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:28.568 "dma_device_type": 2 00:26:28.568 } 00:26:28.568 ], 00:26:28.568 "driver_specific": {} 00:26:28.568 } 00:26:28.568 ] 00:26:28.568 00:40:22 -- common/autotest_common.sh@893 -- # return 0 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.568 00:40:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:28.826 00:40:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:28.826 "name": "Existed_Raid", 00:26:28.826 "uuid": "d37d33f2-124e-4653-80c0-f9b5c04f27bf", 00:26:28.826 "strip_size_kb": 64, 00:26:28.826 "state": "online", 00:26:28.826 "raid_level": "raid5f", 00:26:28.826 "superblock": true, 00:26:28.826 "num_base_bdevs": 3, 00:26:28.826 "num_base_bdevs_discovered": 3, 00:26:28.826 "num_base_bdevs_operational": 3, 00:26:28.826 "base_bdevs_list": [ 00:26:28.826 { 00:26:28.826 "name": "BaseBdev1", 00:26:28.826 "uuid": "9d486d10-2c9a-43ec-af0a-1a8abb6ce8a8", 00:26:28.826 "is_configured": true, 00:26:28.826 "data_offset": 2048, 00:26:28.826 "data_size": 63488 00:26:28.826 }, 00:26:28.826 { 00:26:28.826 "name": "BaseBdev2", 00:26:28.826 "uuid": "92c81abf-5deb-478b-9e80-4051e495858d", 00:26:28.826 "is_configured": true, 00:26:28.826 "data_offset": 2048, 00:26:28.826 "data_size": 63488 00:26:28.826 }, 00:26:28.826 { 00:26:28.826 "name": "BaseBdev3", 00:26:28.826 "uuid": "8b3549fd-53e5-4306-bfb6-04a69366795d", 00:26:28.826 "is_configured": true, 00:26:28.826 "data_offset": 2048, 00:26:28.826 "data_size": 63488 00:26:28.826 } 00:26:28.826 ] 00:26:28.826 }' 00:26:28.826 00:40:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:28.826 00:40:22 -- common/autotest_common.sh@10 -- # set +x 00:26:29.393 00:40:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:29.653 [2024-04-24 00:40:23.380287] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:29.915 00:40:23 
-- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.915 00:40:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:30.173 00:40:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:30.173 "name": "Existed_Raid", 00:26:30.173 "uuid": "d37d33f2-124e-4653-80c0-f9b5c04f27bf", 00:26:30.173 "strip_size_kb": 64, 00:26:30.173 "state": "online", 00:26:30.173 "raid_level": "raid5f", 00:26:30.173 "superblock": true, 00:26:30.173 "num_base_bdevs": 3, 00:26:30.173 "num_base_bdevs_discovered": 2, 00:26:30.173 "num_base_bdevs_operational": 2, 00:26:30.173 "base_bdevs_list": [ 00:26:30.173 { 00:26:30.173 "name": null, 00:26:30.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.173 "is_configured": false, 00:26:30.173 "data_offset": 2048, 00:26:30.173 "data_size": 63488 00:26:30.173 }, 00:26:30.173 { 00:26:30.173 "name": "BaseBdev2", 00:26:30.174 "uuid": "92c81abf-5deb-478b-9e80-4051e495858d", 00:26:30.174 "is_configured": true, 00:26:30.174 "data_offset": 2048, 00:26:30.174 "data_size": 63488 00:26:30.174 }, 00:26:30.174 { 00:26:30.174 "name": "BaseBdev3", 00:26:30.174 "uuid": "8b3549fd-53e5-4306-bfb6-04a69366795d", 00:26:30.174 "is_configured": true, 00:26:30.174 "data_offset": 2048, 00:26:30.174 "data_size": 63488 00:26:30.174 } 00:26:30.174 ] 00:26:30.174 }' 00:26:30.174 00:40:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:30.174 00:40:23 -- common/autotest_common.sh@10 -- # set +x 00:26:30.740 00:40:24 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:30.740 00:40:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:30.740 00:40:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:30.740 00:40:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.998 00:40:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:30.998 00:40:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:30.998 00:40:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:30.998 [2024-04-24 00:40:24.777621] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:30.998 [2024-04-24 00:40:24.777951] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:31.255 [2024-04-24 00:40:24.880374] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:31.255 00:40:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:31.255 00:40:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:31.255 00:40:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:31.255 00:40:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.514 00:40:25 -- 
bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:31.514 00:40:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:31.514 00:40:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:31.771 [2024-04-24 00:40:25.356607] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:31.771 [2024-04-24 00:40:25.356848] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:26:31.771 00:40:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:31.771 00:40:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:31.772 00:40:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:31.772 00:40:25 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.031 00:40:25 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:32.031 00:40:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:32.031 00:40:25 -- bdev/bdev_raid.sh@287 -- # killprocess 136337 00:26:32.031 00:40:25 -- common/autotest_common.sh@936 -- # '[' -z 136337 ']' 00:26:32.031 00:40:25 -- common/autotest_common.sh@940 -- # kill -0 136337 00:26:32.031 00:40:25 -- common/autotest_common.sh@941 -- # uname 00:26:32.031 00:40:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:32.031 00:40:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136337 00:26:32.031 00:40:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:32.031 00:40:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:32.031 00:40:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136337' 00:26:32.031 killing process with pid 136337 00:26:32.031 00:40:25 -- common/autotest_common.sh@955 -- # kill 136337 00:26:32.031 [2024-04-24 00:40:25.774090] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:32.031 00:40:25 -- common/autotest_common.sh@960 -- # wait 136337 00:26:32.031 [2024-04-24 00:40:25.774365] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:33.408 ************************************ 00:26:33.409 END TEST raid5f_state_function_test_sb 00:26:33.409 ************************************ 00:26:33.409 00:40:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:26:33.409 00:26:33.409 real 0m13.469s 00:26:33.409 user 0m22.904s 00:26:33.409 sys 0m1.934s 00:26:33.409 00:40:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:33.409 00:40:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:26:33.668 00:40:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:33.668 00:40:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:33.668 00:40:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.668 ************************************ 00:26:33.668 START TEST raid5f_superblock_test 00:26:33.668 ************************************ 00:26:33.668 00:40:27 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid5f 3 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@341 -- 
# base_bdevs_pt=() 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@357 -- # raid_pid=136801 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@358 -- # waitforlisten 136801 /var/tmp/spdk-raid.sock 00:26:33.668 00:40:27 -- common/autotest_common.sh@817 -- # '[' -z 136801 ']' 00:26:33.668 00:40:27 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:33.668 00:40:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:33.668 00:40:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:33.668 00:40:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:33.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:33.668 00:40:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:33.668 00:40:27 -- common/autotest_common.sh@10 -- # set +x 00:26:33.668 [2024-04-24 00:40:27.357083] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
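For reference: the raid5f_superblock_test starting here builds its array from malloc bdevs wrapped in passthru bdevs, rather than from raw malloc bdevs as in the previous test. A minimal sketch of the RPC sequence it drives is below; the socket path, bdev names, UUIDs, sizes and the -s flag mirror the log that follows, while the waitforbdev/timeout plumbing and cleanup are omitted, so treat this as an illustration rather than the exact helper code.

    # sketch only -- assumes the bdev_svc app is already listening on the raid socket
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b malloc$i        # 32 MiB, 512 B blocks -> 65536 blocks, as in the log
        $RPC bdev_passthru_create -b malloc$i -p pt$i \
             -u 00000000-0000-0000-0000-00000000000$i     # passthru wrapper used as the raid base bdev
    done
    # -z 64: 64 KiB strip size; -s: create the array with an on-disk superblock,
    # which is what the later reassembly steps in this test rely on
    $RPC bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s
    $RPC bdev_raid_get_bdevs all                          # expect raid_bdev1 online with 3/3 base bdevs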
00:26:33.668 [2024-04-24 00:40:27.357279] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136801 ] 00:26:33.927 [2024-04-24 00:40:27.541326] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.185 [2024-04-24 00:40:27.841106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.443 [2024-04-24 00:40:28.086501] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:34.700 00:40:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:34.700 00:40:28 -- common/autotest_common.sh@850 -- # return 0 00:26:34.700 00:40:28 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:26:34.700 00:40:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:34.700 00:40:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:26:34.700 00:40:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:26:34.700 00:40:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:34.700 00:40:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:34.700 00:40:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:34.700 00:40:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:34.700 00:40:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:35.034 malloc1 00:26:35.034 00:40:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:35.295 [2024-04-24 00:40:28.867456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:35.295 [2024-04-24 00:40:28.867548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.295 [2024-04-24 00:40:28.867600] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:26:35.295 [2024-04-24 00:40:28.867647] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.295 [2024-04-24 00:40:28.870139] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.295 [2024-04-24 00:40:28.870191] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:35.295 pt1 00:26:35.295 00:40:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:35.295 00:40:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:35.295 00:40:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:26:35.295 00:40:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:26:35.295 00:40:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:35.295 00:40:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:35.295 00:40:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:35.295 00:40:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:35.295 00:40:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:35.554 malloc2 00:26:35.555 00:40:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:26:35.813 [2024-04-24 00:40:29.455778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:35.813 [2024-04-24 00:40:29.455869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.813 [2024-04-24 00:40:29.455928] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:35.813 [2024-04-24 00:40:29.455981] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.813 [2024-04-24 00:40:29.458350] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.813 [2024-04-24 00:40:29.458403] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:35.813 pt2 00:26:35.813 00:40:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:35.813 00:40:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:35.813 00:40:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:26:35.813 00:40:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:26:35.813 00:40:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:35.813 00:40:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:35.813 00:40:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:35.813 00:40:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:35.813 00:40:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:36.071 malloc3 00:26:36.071 00:40:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:36.071 [2024-04-24 00:40:29.853568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:36.071 [2024-04-24 00:40:29.853656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.071 [2024-04-24 00:40:29.853704] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:26:36.071 [2024-04-24 00:40:29.853755] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.071 [2024-04-24 00:40:29.856204] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.071 [2024-04-24 00:40:29.856259] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:36.071 pt3 00:26:36.331 00:40:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:36.331 00:40:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:36.331 00:40:29 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:26:36.331 [2024-04-24 00:40:30.093673] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:36.331 [2024-04-24 00:40:30.095962] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:36.331 [2024-04-24 00:40:30.096168] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:36.331 [2024-04-24 00:40:30.096400] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:26:36.331 [2024-04-24 00:40:30.096530] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:36.331 [2024-04-24 00:40:30.096697] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:26:36.331 [2024-04-24 00:40:30.102651] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:26:36.331 [2024-04-24 00:40:30.102797] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:26:36.331 [2024-04-24 00:40:30.103110] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:36.331 00:40:30 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:36.331 00:40:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:36.331 00:40:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:36.331 00:40:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:36.331 00:40:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:36.331 00:40:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:36.331 00:40:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:36.331 00:40:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:36.331 00:40:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:36.331 00:40:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:36.598 00:40:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.598 00:40:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.858 00:40:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:36.858 "name": "raid_bdev1", 00:26:36.858 "uuid": "f164fd70-d28a-4d3c-bbbb-429069636ee6", 00:26:36.858 "strip_size_kb": 64, 00:26:36.858 "state": "online", 00:26:36.858 "raid_level": "raid5f", 00:26:36.858 "superblock": true, 00:26:36.858 "num_base_bdevs": 3, 00:26:36.858 "num_base_bdevs_discovered": 3, 00:26:36.858 "num_base_bdevs_operational": 3, 00:26:36.858 "base_bdevs_list": [ 00:26:36.858 { 00:26:36.858 "name": "pt1", 00:26:36.858 "uuid": "44a79298-d4c8-5e7a-97f8-bafb3be7fb95", 00:26:36.858 "is_configured": true, 00:26:36.858 "data_offset": 2048, 00:26:36.858 "data_size": 63488 00:26:36.858 }, 00:26:36.858 { 00:26:36.858 "name": "pt2", 00:26:36.858 "uuid": "eaf86316-c00d-5097-a025-6df684fe1600", 00:26:36.858 "is_configured": true, 00:26:36.858 "data_offset": 2048, 00:26:36.858 "data_size": 63488 00:26:36.858 }, 00:26:36.858 { 00:26:36.858 "name": "pt3", 00:26:36.858 "uuid": "791229ca-1c5a-5cad-8fda-21ec1b78bf97", 00:26:36.858 "is_configured": true, 00:26:36.858 "data_offset": 2048, 00:26:36.858 "data_size": 63488 00:26:36.858 } 00:26:36.858 ] 00:26:36.858 }' 00:26:36.858 00:40:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:36.858 00:40:30 -- common/autotest_common.sh@10 -- # set +x 00:26:37.426 00:40:31 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:26:37.426 00:40:31 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:37.685 [2024-04-24 00:40:31.330455] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:37.685 00:40:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f164fd70-d28a-4d3c-bbbb-429069636ee6 00:26:37.685 00:40:31 -- bdev/bdev_raid.sh@380 -- # '[' -z f164fd70-d28a-4d3c-bbbb-429069636ee6 ']' 00:26:37.685 00:40:31 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:37.944 [2024-04-24 00:40:31.538324] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:37.944 [2024-04-24 00:40:31.538543] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:37.944 [2024-04-24 00:40:31.538747] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:37.944 [2024-04-24 00:40:31.538988] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:37.944 [2024-04-24 00:40:31.539094] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:26:37.944 00:40:31 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.944 00:40:31 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:26:38.202 00:40:31 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:26:38.203 00:40:31 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:26:38.203 00:40:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:38.203 00:40:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:38.462 00:40:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:38.462 00:40:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:38.720 00:40:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:38.720 00:40:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:38.977 00:40:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:38.977 00:40:32 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:39.237 00:40:32 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:26:39.237 00:40:32 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:39.237 00:40:32 -- common/autotest_common.sh@638 -- # local es=0 00:26:39.237 00:40:32 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:39.237 00:40:32 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:39.237 00:40:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:39.237 00:40:32 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:39.237 00:40:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:39.237 00:40:32 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:39.237 00:40:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:39.237 00:40:32 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:39.237 00:40:32 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:39.237 00:40:32 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:26:39.497 [2024-04-24 00:40:33.106597] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:39.497 [2024-04-24 00:40:33.108880] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:39.497 [2024-04-24 00:40:33.109052] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:39.497 [2024-04-24 00:40:33.109131] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:26:39.497 [2024-04-24 00:40:33.109282] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:26:39.497 [2024-04-24 00:40:33.109340] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:26:39.497 [2024-04-24 00:40:33.109462] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:39.497 [2024-04-24 00:40:33.109496] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:26:39.497 request: 00:26:39.497 { 00:26:39.497 "name": "raid_bdev1", 00:26:39.497 "raid_level": "raid5f", 00:26:39.497 "base_bdevs": [ 00:26:39.497 "malloc1", 00:26:39.497 "malloc2", 00:26:39.497 "malloc3" 00:26:39.497 ], 00:26:39.497 "superblock": false, 00:26:39.497 "strip_size_kb": 64, 00:26:39.497 "method": "bdev_raid_create", 00:26:39.497 "req_id": 1 00:26:39.497 } 00:26:39.497 Got JSON-RPC error response 00:26:39.497 response: 00:26:39.497 { 00:26:39.497 "code": -17, 00:26:39.497 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:39.497 } 00:26:39.497 00:40:33 -- common/autotest_common.sh@641 -- # es=1 00:26:39.497 00:40:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:39.497 00:40:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:39.497 00:40:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:39.497 00:40:33 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.497 00:40:33 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:26:39.756 00:40:33 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:26:39.756 00:40:33 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:26:39.756 00:40:33 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:40.014 [2024-04-24 00:40:33.650626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:40.014 [2024-04-24 00:40:33.650900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:40.014 [2024-04-24 00:40:33.651077] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:26:40.014 [2024-04-24 00:40:33.651177] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:40.014 [2024-04-24 00:40:33.653708] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:40.014 [2024-04-24 00:40:33.653869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:40.014 [2024-04-24 00:40:33.654115] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:40.014 [2024-04-24 00:40:33.654291] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:40.014 pt1 00:26:40.014 00:40:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:26:40.014 00:40:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:40.014 00:40:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:40.014 00:40:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:40.014 00:40:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:40.014 00:40:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:40.014 00:40:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:40.014 00:40:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:40.014 00:40:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:40.014 00:40:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:40.014 00:40:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.014 00:40:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.272 00:40:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:40.272 "name": "raid_bdev1", 00:26:40.272 "uuid": "f164fd70-d28a-4d3c-bbbb-429069636ee6", 00:26:40.272 "strip_size_kb": 64, 00:26:40.272 "state": "configuring", 00:26:40.272 "raid_level": "raid5f", 00:26:40.272 "superblock": true, 00:26:40.272 "num_base_bdevs": 3, 00:26:40.272 "num_base_bdevs_discovered": 1, 00:26:40.272 "num_base_bdevs_operational": 3, 00:26:40.272 "base_bdevs_list": [ 00:26:40.272 { 00:26:40.272 "name": "pt1", 00:26:40.272 "uuid": "44a79298-d4c8-5e7a-97f8-bafb3be7fb95", 00:26:40.272 "is_configured": true, 00:26:40.272 "data_offset": 2048, 00:26:40.272 "data_size": 63488 00:26:40.273 }, 00:26:40.273 { 00:26:40.273 "name": null, 00:26:40.273 "uuid": "eaf86316-c00d-5097-a025-6df684fe1600", 00:26:40.273 "is_configured": false, 00:26:40.273 "data_offset": 2048, 00:26:40.273 "data_size": 63488 00:26:40.273 }, 00:26:40.273 { 00:26:40.273 "name": null, 00:26:40.273 "uuid": "791229ca-1c5a-5cad-8fda-21ec1b78bf97", 00:26:40.273 "is_configured": false, 00:26:40.273 "data_offset": 2048, 00:26:40.273 "data_size": 63488 00:26:40.273 } 00:26:40.273 ] 00:26:40.273 }' 00:26:40.273 00:40:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:40.273 00:40:33 -- common/autotest_common.sh@10 -- # set +x 00:26:40.840 00:40:34 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:26:40.840 00:40:34 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:41.099 [2024-04-24 00:40:34.702919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:41.099 [2024-04-24 00:40:34.703208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:41.099 [2024-04-24 00:40:34.703366] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:41.099 [2024-04-24 00:40:34.703466] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:41.099 [2024-04-24 00:40:34.704032] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:41.099 [2024-04-24 00:40:34.704179] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:41.099 [2024-04-24 00:40:34.704402] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:41.099 [2024-04-24 00:40:34.704515] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:41.099 pt2 00:26:41.099 00:40:34 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:41.359 [2024-04-24 00:40:34.907041] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:41.359 00:40:34 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:41.359 00:40:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:41.359 00:40:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:41.359 00:40:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:41.359 00:40:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:41.359 00:40:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:41.359 00:40:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:41.359 00:40:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:41.359 00:40:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:41.359 00:40:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:41.359 00:40:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:41.359 00:40:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.641 00:40:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:41.641 "name": "raid_bdev1", 00:26:41.641 "uuid": "f164fd70-d28a-4d3c-bbbb-429069636ee6", 00:26:41.641 "strip_size_kb": 64, 00:26:41.641 "state": "configuring", 00:26:41.641 "raid_level": "raid5f", 00:26:41.641 "superblock": true, 00:26:41.641 "num_base_bdevs": 3, 00:26:41.641 "num_base_bdevs_discovered": 1, 00:26:41.641 "num_base_bdevs_operational": 3, 00:26:41.641 "base_bdevs_list": [ 00:26:41.641 { 00:26:41.641 "name": "pt1", 00:26:41.641 "uuid": "44a79298-d4c8-5e7a-97f8-bafb3be7fb95", 00:26:41.641 "is_configured": true, 00:26:41.641 "data_offset": 2048, 00:26:41.641 "data_size": 63488 00:26:41.641 }, 00:26:41.641 { 00:26:41.641 "name": null, 00:26:41.641 "uuid": "eaf86316-c00d-5097-a025-6df684fe1600", 00:26:41.641 "is_configured": false, 00:26:41.641 "data_offset": 2048, 00:26:41.641 "data_size": 63488 00:26:41.641 }, 00:26:41.641 { 00:26:41.641 "name": null, 00:26:41.641 "uuid": "791229ca-1c5a-5cad-8fda-21ec1b78bf97", 00:26:41.641 "is_configured": false, 00:26:41.641 "data_offset": 2048, 00:26:41.641 "data_size": 63488 00:26:41.641 } 00:26:41.641 ] 00:26:41.641 }' 00:26:41.641 00:40:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:41.641 00:40:35 -- common/autotest_common.sh@10 -- # set +x 00:26:42.206 00:40:35 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:26:42.206 00:40:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:42.206 00:40:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:42.464 [2024-04-24 00:40:36.083385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:42.464 [2024-04-24 00:40:36.083671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.464 [2024-04-24 00:40:36.083748] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:26:42.464 [2024-04-24 00:40:36.083952] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.464 [2024-04-24 00:40:36.084491] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.464 [2024-04-24 00:40:36.084651] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:42.464 [2024-04-24 00:40:36.084886] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:42.464 [2024-04-24 00:40:36.085005] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:42.464 pt2 00:26:42.464 00:40:36 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:42.464 00:40:36 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:42.464 00:40:36 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:42.723 [2024-04-24 00:40:36.287391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:42.723 [2024-04-24 00:40:36.287679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.723 [2024-04-24 00:40:36.287816] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:26:42.723 [2024-04-24 00:40:36.287923] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.723 [2024-04-24 00:40:36.288531] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.723 [2024-04-24 00:40:36.288699] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:42.723 [2024-04-24 00:40:36.288969] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:42.723 [2024-04-24 00:40:36.289080] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:42.723 [2024-04-24 00:40:36.289253] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:26:42.723 [2024-04-24 00:40:36.289337] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:42.723 [2024-04-24 00:40:36.289546] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:42.723 [2024-04-24 00:40:36.295420] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:26:42.723 [2024-04-24 00:40:36.295541] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:26:42.723 [2024-04-24 00:40:36.295824] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:42.723 pt3 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:42.723 00:40:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.723 
00:40:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:42.983 00:40:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:42.983 "name": "raid_bdev1", 00:26:42.983 "uuid": "f164fd70-d28a-4d3c-bbbb-429069636ee6", 00:26:42.983 "strip_size_kb": 64, 00:26:42.983 "state": "online", 00:26:42.983 "raid_level": "raid5f", 00:26:42.983 "superblock": true, 00:26:42.983 "num_base_bdevs": 3, 00:26:42.983 "num_base_bdevs_discovered": 3, 00:26:42.983 "num_base_bdevs_operational": 3, 00:26:42.983 "base_bdevs_list": [ 00:26:42.983 { 00:26:42.983 "name": "pt1", 00:26:42.983 "uuid": "44a79298-d4c8-5e7a-97f8-bafb3be7fb95", 00:26:42.983 "is_configured": true, 00:26:42.983 "data_offset": 2048, 00:26:42.983 "data_size": 63488 00:26:42.983 }, 00:26:42.983 { 00:26:42.983 "name": "pt2", 00:26:42.983 "uuid": "eaf86316-c00d-5097-a025-6df684fe1600", 00:26:42.983 "is_configured": true, 00:26:42.983 "data_offset": 2048, 00:26:42.983 "data_size": 63488 00:26:42.983 }, 00:26:42.983 { 00:26:42.983 "name": "pt3", 00:26:42.983 "uuid": "791229ca-1c5a-5cad-8fda-21ec1b78bf97", 00:26:42.983 "is_configured": true, 00:26:42.983 "data_offset": 2048, 00:26:42.983 "data_size": 63488 00:26:42.983 } 00:26:42.983 ] 00:26:42.983 }' 00:26:42.983 00:40:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:42.983 00:40:36 -- common/autotest_common.sh@10 -- # set +x 00:26:43.549 00:40:37 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:43.549 00:40:37 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:26:43.827 [2024-04-24 00:40:37.495647] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:43.827 00:40:37 -- bdev/bdev_raid.sh@430 -- # '[' f164fd70-d28a-4d3c-bbbb-429069636ee6 '!=' f164fd70-d28a-4d3c-bbbb-429069636ee6 ']' 00:26:43.827 00:40:37 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:26:43.827 00:40:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:43.827 00:40:37 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:43.827 00:40:37 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:44.087 [2024-04-24 00:40:37.759648] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:44.087 00:40:37 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:44.087 00:40:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:44.087 00:40:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:44.087 00:40:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:44.087 00:40:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:44.087 00:40:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:44.087 00:40:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:44.087 00:40:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:44.087 00:40:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:44.087 00:40:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:44.087 00:40:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.087 00:40:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.345 00:40:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:44.345 "name": "raid_bdev1", 00:26:44.345 "uuid": "f164fd70-d28a-4d3c-bbbb-429069636ee6", 00:26:44.345 "strip_size_kb": 64, 
00:26:44.345 "state": "online", 00:26:44.345 "raid_level": "raid5f", 00:26:44.345 "superblock": true, 00:26:44.345 "num_base_bdevs": 3, 00:26:44.345 "num_base_bdevs_discovered": 2, 00:26:44.345 "num_base_bdevs_operational": 2, 00:26:44.345 "base_bdevs_list": [ 00:26:44.345 { 00:26:44.345 "name": null, 00:26:44.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.345 "is_configured": false, 00:26:44.345 "data_offset": 2048, 00:26:44.345 "data_size": 63488 00:26:44.345 }, 00:26:44.345 { 00:26:44.345 "name": "pt2", 00:26:44.345 "uuid": "eaf86316-c00d-5097-a025-6df684fe1600", 00:26:44.345 "is_configured": true, 00:26:44.345 "data_offset": 2048, 00:26:44.345 "data_size": 63488 00:26:44.345 }, 00:26:44.345 { 00:26:44.345 "name": "pt3", 00:26:44.345 "uuid": "791229ca-1c5a-5cad-8fda-21ec1b78bf97", 00:26:44.345 "is_configured": true, 00:26:44.345 "data_offset": 2048, 00:26:44.345 "data_size": 63488 00:26:44.345 } 00:26:44.345 ] 00:26:44.345 }' 00:26:44.345 00:40:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:44.345 00:40:38 -- common/autotest_common.sh@10 -- # set +x 00:26:44.911 00:40:38 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:45.169 [2024-04-24 00:40:38.871844] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:45.169 [2024-04-24 00:40:38.872065] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:45.169 [2024-04-24 00:40:38.872213] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:45.169 [2024-04-24 00:40:38.872363] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:45.169 [2024-04-24 00:40:38.872442] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:26:45.169 00:40:38 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.169 00:40:38 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:26:45.426 00:40:39 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:26:45.426 00:40:39 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:26:45.426 00:40:39 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:26:45.426 00:40:39 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:45.426 00:40:39 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:45.712 00:40:39 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:45.712 00:40:39 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:45.712 00:40:39 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:45.972 00:40:39 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:45.972 00:40:39 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:45.972 00:40:39 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:26:45.972 00:40:39 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:45.972 00:40:39 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:46.231 [2024-04-24 00:40:39.804027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:46.231 [2024-04-24 00:40:39.804352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
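The repeated verify_raid_bdev_state checks in this log reduce to fetching the raid bdev's JSON over RPC and asserting on a few of its fields. A rough shell approximation is below; the jq filter and field names are the ones visible in the log output, but the real helper in bdev_raid.sh checks more fields and may differ in detail.

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    verify_raid_bdev_state() {    # <name> <expected_state> <raid_level> <strip_size_kb> <num_operational>
        local info
        info=$($RPC bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$1\")")
        [ "$(jq -r '.state' <<< "$info")" = "$2" ] &&
            [ "$(jq -r '.raid_level' <<< "$info")" = "$3" ] &&
            [ "$(jq -r '.strip_size_kb' <<< "$info")" -eq "$4" ] &&
            [ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" -eq "$5" ]
    }
    # matches the call made just after this point in the log: only pt2 has been
    # re-registered (pt1 was removed from the array earlier in the test), so the
    # raid reassembled from the superblock sits in "configuring" with 2 operational base bdevs
    verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2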
00:26:46.231 [2024-04-24 00:40:39.804481] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:26:46.231 [2024-04-24 00:40:39.804589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:46.231 [2024-04-24 00:40:39.807140] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:46.231 [2024-04-24 00:40:39.807317] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:46.231 [2024-04-24 00:40:39.807548] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:46.231 [2024-04-24 00:40:39.807705] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:46.231 pt2 00:26:46.231 00:40:39 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:26:46.231 00:40:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:46.231 00:40:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:46.231 00:40:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:46.231 00:40:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:46.231 00:40:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:46.231 00:40:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:46.231 00:40:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:46.231 00:40:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:46.231 00:40:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:46.231 00:40:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.231 00:40:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.489 00:40:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:46.489 "name": "raid_bdev1", 00:26:46.490 "uuid": "f164fd70-d28a-4d3c-bbbb-429069636ee6", 00:26:46.490 "strip_size_kb": 64, 00:26:46.490 "state": "configuring", 00:26:46.490 "raid_level": "raid5f", 00:26:46.490 "superblock": true, 00:26:46.490 "num_base_bdevs": 3, 00:26:46.490 "num_base_bdevs_discovered": 1, 00:26:46.490 "num_base_bdevs_operational": 2, 00:26:46.490 "base_bdevs_list": [ 00:26:46.490 { 00:26:46.490 "name": null, 00:26:46.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.490 "is_configured": false, 00:26:46.490 "data_offset": 2048, 00:26:46.490 "data_size": 63488 00:26:46.490 }, 00:26:46.490 { 00:26:46.490 "name": "pt2", 00:26:46.490 "uuid": "eaf86316-c00d-5097-a025-6df684fe1600", 00:26:46.490 "is_configured": true, 00:26:46.490 "data_offset": 2048, 00:26:46.490 "data_size": 63488 00:26:46.490 }, 00:26:46.490 { 00:26:46.490 "name": null, 00:26:46.490 "uuid": "791229ca-1c5a-5cad-8fda-21ec1b78bf97", 00:26:46.490 "is_configured": false, 00:26:46.490 "data_offset": 2048, 00:26:46.490 "data_size": 63488 00:26:46.490 } 00:26:46.490 ] 00:26:46.490 }' 00:26:46.490 00:40:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:46.490 00:40:40 -- common/autotest_common.sh@10 -- # set +x 00:26:47.056 00:40:40 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:26:47.056 00:40:40 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:47.056 00:40:40 -- bdev/bdev_raid.sh@462 -- # i=2 00:26:47.056 00:40:40 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:47.339 [2024-04-24 00:40:40.996337] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:47.339 [2024-04-24 00:40:40.996644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:47.339 [2024-04-24 00:40:40.996726] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:47.339 [2024-04-24 00:40:40.996834] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:47.339 [2024-04-24 00:40:40.997369] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:47.339 [2024-04-24 00:40:40.997535] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:47.339 [2024-04-24 00:40:40.997762] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:47.339 [2024-04-24 00:40:40.997875] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:47.339 [2024-04-24 00:40:40.998116] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:26:47.339 [2024-04-24 00:40:40.998243] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:47.339 [2024-04-24 00:40:40.998381] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:47.339 [2024-04-24 00:40:41.004198] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:26:47.339 [2024-04-24 00:40:41.004331] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:26:47.339 [2024-04-24 00:40:41.004734] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:47.339 pt3 00:26:47.339 00:40:41 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:47.339 00:40:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:47.339 00:40:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:47.339 00:40:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:47.339 00:40:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:47.339 00:40:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:47.339 00:40:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:47.339 00:40:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:47.339 00:40:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:47.339 00:40:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:47.339 00:40:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.339 00:40:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.597 00:40:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:47.597 "name": "raid_bdev1", 00:26:47.597 "uuid": "f164fd70-d28a-4d3c-bbbb-429069636ee6", 00:26:47.597 "strip_size_kb": 64, 00:26:47.597 "state": "online", 00:26:47.597 "raid_level": "raid5f", 00:26:47.597 "superblock": true, 00:26:47.597 "num_base_bdevs": 3, 00:26:47.597 "num_base_bdevs_discovered": 2, 00:26:47.597 "num_base_bdevs_operational": 2, 00:26:47.597 "base_bdevs_list": [ 00:26:47.597 { 00:26:47.597 "name": null, 00:26:47.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.597 "is_configured": false, 00:26:47.598 "data_offset": 2048, 00:26:47.598 "data_size": 63488 00:26:47.598 }, 00:26:47.598 { 00:26:47.598 "name": "pt2", 00:26:47.598 "uuid": "eaf86316-c00d-5097-a025-6df684fe1600", 
00:26:47.598 "is_configured": true, 00:26:47.598 "data_offset": 2048, 00:26:47.598 "data_size": 63488 00:26:47.598 }, 00:26:47.598 { 00:26:47.598 "name": "pt3", 00:26:47.598 "uuid": "791229ca-1c5a-5cad-8fda-21ec1b78bf97", 00:26:47.598 "is_configured": true, 00:26:47.598 "data_offset": 2048, 00:26:47.598 "data_size": 63488 00:26:47.598 } 00:26:47.598 ] 00:26:47.598 }' 00:26:47.598 00:40:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:47.598 00:40:41 -- common/autotest_common.sh@10 -- # set +x 00:26:48.163 00:40:41 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:26:48.163 00:40:41 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:48.422 [2024-04-24 00:40:42.184856] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:48.422 [2024-04-24 00:40:42.185055] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:48.422 [2024-04-24 00:40:42.185267] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:48.422 [2024-04-24 00:40:42.185357] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:48.422 [2024-04-24 00:40:42.185445] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:26:48.681 00:40:42 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.681 00:40:42 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:26:48.681 00:40:42 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:26:48.681 00:40:42 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:26:48.681 00:40:42 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:48.940 [2024-04-24 00:40:42.632915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:48.940 [2024-04-24 00:40:42.633184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:48.940 [2024-04-24 00:40:42.633263] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:26:48.940 [2024-04-24 00:40:42.633386] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:48.940 [2024-04-24 00:40:42.636115] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:48.940 [2024-04-24 00:40:42.636281] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:48.940 [2024-04-24 00:40:42.636508] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:48.940 [2024-04-24 00:40:42.636666] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:48.940 pt1 00:26:48.940 00:40:42 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:48.940 00:40:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:48.940 00:40:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:48.940 00:40:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:48.940 00:40:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:48.940 00:40:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:48.940 00:40:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:48.940 00:40:42 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:48.940 00:40:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:48.940 00:40:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:48.940 00:40:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.940 00:40:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.199 00:40:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:49.199 "name": "raid_bdev1", 00:26:49.199 "uuid": "f164fd70-d28a-4d3c-bbbb-429069636ee6", 00:26:49.199 "strip_size_kb": 64, 00:26:49.199 "state": "configuring", 00:26:49.199 "raid_level": "raid5f", 00:26:49.199 "superblock": true, 00:26:49.199 "num_base_bdevs": 3, 00:26:49.199 "num_base_bdevs_discovered": 1, 00:26:49.199 "num_base_bdevs_operational": 3, 00:26:49.199 "base_bdevs_list": [ 00:26:49.199 { 00:26:49.199 "name": "pt1", 00:26:49.199 "uuid": "44a79298-d4c8-5e7a-97f8-bafb3be7fb95", 00:26:49.199 "is_configured": true, 00:26:49.199 "data_offset": 2048, 00:26:49.199 "data_size": 63488 00:26:49.199 }, 00:26:49.199 { 00:26:49.199 "name": null, 00:26:49.199 "uuid": "eaf86316-c00d-5097-a025-6df684fe1600", 00:26:49.199 "is_configured": false, 00:26:49.199 "data_offset": 2048, 00:26:49.199 "data_size": 63488 00:26:49.199 }, 00:26:49.199 { 00:26:49.199 "name": null, 00:26:49.199 "uuid": "791229ca-1c5a-5cad-8fda-21ec1b78bf97", 00:26:49.199 "is_configured": false, 00:26:49.199 "data_offset": 2048, 00:26:49.200 "data_size": 63488 00:26:49.200 } 00:26:49.200 ] 00:26:49.200 }' 00:26:49.200 00:40:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:49.200 00:40:42 -- common/autotest_common.sh@10 -- # set +x 00:26:49.766 00:40:43 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:26:49.766 00:40:43 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:49.766 00:40:43 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:50.024 00:40:43 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:50.024 00:40:43 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:50.024 00:40:43 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:50.283 00:40:43 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:50.283 00:40:43 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:50.283 00:40:43 -- bdev/bdev_raid.sh@489 -- # i=2 00:26:50.283 00:40:43 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:50.283 [2024-04-24 00:40:44.005236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:50.283 [2024-04-24 00:40:44.005458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:50.283 [2024-04-24 00:40:44.005567] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:50.283 [2024-04-24 00:40:44.005667] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:50.283 [2024-04-24 00:40:44.006148] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:50.283 [2024-04-24 00:40:44.006292] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:50.283 [2024-04-24 00:40:44.006498] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:26:50.283 [2024-04-24 00:40:44.006593] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:50.283 [2024-04-24 00:40:44.006665] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:50.283 [2024-04-24 00:40:44.006740] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state configuring 00:26:50.283 [2024-04-24 00:40:44.006894] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:50.283 pt3 00:26:50.283 00:40:44 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:26:50.283 00:40:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:50.283 00:40:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:50.283 00:40:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:50.283 00:40:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:50.283 00:40:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:50.283 00:40:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:50.283 00:40:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:50.283 00:40:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:50.283 00:40:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:50.283 00:40:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.283 00:40:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.542 00:40:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:50.542 "name": "raid_bdev1", 00:26:50.542 "uuid": "f164fd70-d28a-4d3c-bbbb-429069636ee6", 00:26:50.542 "strip_size_kb": 64, 00:26:50.542 "state": "configuring", 00:26:50.542 "raid_level": "raid5f", 00:26:50.542 "superblock": true, 00:26:50.542 "num_base_bdevs": 3, 00:26:50.542 "num_base_bdevs_discovered": 1, 00:26:50.542 "num_base_bdevs_operational": 2, 00:26:50.542 "base_bdevs_list": [ 00:26:50.542 { 00:26:50.542 "name": null, 00:26:50.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.542 "is_configured": false, 00:26:50.542 "data_offset": 2048, 00:26:50.542 "data_size": 63488 00:26:50.542 }, 00:26:50.542 { 00:26:50.542 "name": null, 00:26:50.542 "uuid": "eaf86316-c00d-5097-a025-6df684fe1600", 00:26:50.542 "is_configured": false, 00:26:50.542 "data_offset": 2048, 00:26:50.542 "data_size": 63488 00:26:50.542 }, 00:26:50.542 { 00:26:50.542 "name": "pt3", 00:26:50.542 "uuid": "791229ca-1c5a-5cad-8fda-21ec1b78bf97", 00:26:50.542 "is_configured": true, 00:26:50.542 "data_offset": 2048, 00:26:50.542 "data_size": 63488 00:26:50.542 } 00:26:50.542 ] 00:26:50.542 }' 00:26:50.542 00:40:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:50.542 00:40:44 -- common/autotest_common.sh@10 -- # set +x 00:26:51.114 00:40:44 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:26:51.114 00:40:44 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:51.114 00:40:44 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:51.419 [2024-04-24 00:40:45.002915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:51.419 [2024-04-24 00:40:45.003244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:51.419 [2024-04-24 
00:40:45.003325] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:51.419 [2024-04-24 00:40:45.003452] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:51.419 [2024-04-24 00:40:45.004039] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:51.419 [2024-04-24 00:40:45.004208] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:51.419 [2024-04-24 00:40:45.004422] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:51.419 [2024-04-24 00:40:45.004565] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:51.419 [2024-04-24 00:40:45.004782] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:26:51.419 [2024-04-24 00:40:45.004884] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:51.419 [2024-04-24 00:40:45.005017] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:26:51.419 [2024-04-24 00:40:45.010559] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:26:51.419 [2024-04-24 00:40:45.010686] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011f80 00:26:51.419 [2024-04-24 00:40:45.011054] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:51.419 pt2 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.419 00:40:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.679 00:40:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:51.679 "name": "raid_bdev1", 00:26:51.679 "uuid": "f164fd70-d28a-4d3c-bbbb-429069636ee6", 00:26:51.679 "strip_size_kb": 64, 00:26:51.679 "state": "online", 00:26:51.679 "raid_level": "raid5f", 00:26:51.679 "superblock": true, 00:26:51.679 "num_base_bdevs": 3, 00:26:51.679 "num_base_bdevs_discovered": 2, 00:26:51.679 "num_base_bdevs_operational": 2, 00:26:51.679 "base_bdevs_list": [ 00:26:51.679 { 00:26:51.679 "name": null, 00:26:51.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.679 "is_configured": false, 00:26:51.679 "data_offset": 2048, 00:26:51.679 "data_size": 63488 00:26:51.679 }, 00:26:51.679 { 00:26:51.679 "name": "pt2", 00:26:51.679 "uuid": "eaf86316-c00d-5097-a025-6df684fe1600", 00:26:51.679 "is_configured": true, 00:26:51.679 "data_offset": 2048, 
00:26:51.679 "data_size": 63488 00:26:51.679 }, 00:26:51.679 { 00:26:51.679 "name": "pt3", 00:26:51.679 "uuid": "791229ca-1c5a-5cad-8fda-21ec1b78bf97", 00:26:51.679 "is_configured": true, 00:26:51.679 "data_offset": 2048, 00:26:51.679 "data_size": 63488 00:26:51.679 } 00:26:51.679 ] 00:26:51.679 }' 00:26:51.679 00:40:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:51.679 00:40:45 -- common/autotest_common.sh@10 -- # set +x 00:26:52.249 00:40:45 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:52.249 00:40:45 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:26:52.508 [2024-04-24 00:40:46.227708] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:52.508 00:40:46 -- bdev/bdev_raid.sh@506 -- # '[' f164fd70-d28a-4d3c-bbbb-429069636ee6 '!=' f164fd70-d28a-4d3c-bbbb-429069636ee6 ']' 00:26:52.508 00:40:46 -- bdev/bdev_raid.sh@511 -- # killprocess 136801 00:26:52.508 00:40:46 -- common/autotest_common.sh@936 -- # '[' -z 136801 ']' 00:26:52.508 00:40:46 -- common/autotest_common.sh@940 -- # kill -0 136801 00:26:52.508 00:40:46 -- common/autotest_common.sh@941 -- # uname 00:26:52.508 00:40:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:52.508 00:40:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136801 00:26:52.508 killing process with pid 136801 00:26:52.508 00:40:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:52.508 00:40:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:52.508 00:40:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136801' 00:26:52.508 00:40:46 -- common/autotest_common.sh@955 -- # kill 136801 00:26:52.508 00:40:46 -- common/autotest_common.sh@960 -- # wait 136801 00:26:52.508 [2024-04-24 00:40:46.275468] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:52.508 [2024-04-24 00:40:46.275564] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:52.508 [2024-04-24 00:40:46.275639] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:52.508 [2024-04-24 00:40:46.275651] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state offline 00:26:53.076 [2024-04-24 00:40:46.578004] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:54.523 ************************************ 00:26:54.523 END TEST raid5f_superblock_test 00:26:54.523 ************************************ 00:26:54.523 00:40:47 -- bdev/bdev_raid.sh@513 -- # return 0 00:26:54.523 00:26:54.523 real 0m20.652s 00:26:54.523 user 0m36.669s 00:26:54.523 sys 0m3.104s 00:26:54.523 00:40:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:54.523 00:40:47 -- common/autotest_common.sh@10 -- # set +x 00:26:54.523 00:40:47 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:26:54.523 00:40:47 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:26:54.523 00:40:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:26:54.523 00:40:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:54.523 00:40:47 -- common/autotest_common.sh@10 -- # set +x 00:26:54.523 ************************************ 00:26:54.523 START TEST raid5f_rebuild_test 00:26:54.523 ************************************ 00:26:54.523 00:40:48 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 3 
false false 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@544 -- # raid_pid=137430 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137430 /var/tmp/spdk-raid.sock 00:26:54.523 00:40:48 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:54.523 00:40:48 -- common/autotest_common.sh@817 -- # '[' -z 137430 ']' 00:26:54.523 00:40:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:54.523 00:40:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:54.523 00:40:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:54.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:54.523 00:40:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:54.523 00:40:48 -- common/autotest_common.sh@10 -- # set +x 00:26:54.523 [2024-04-24 00:40:48.118015] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:26:54.523 [2024-04-24 00:40:48.118518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137430 ] 00:26:54.523 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:54.523 Zero copy mechanism will not be used. 
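A simplified sketch of how the rebuild test brings up its RPC target, using the exact bdevperf invocation recorded above; the polling loop is only a crude stand-in for the waitforlisten helper the test actually calls:

sock=/var/tmp/spdk-raid.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
# crude wait for the RPC socket; the real test runs: waitforlisten $raid_pid $sock
while [ ! -S "$sock" ]; do sleep 0.1; done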
00:26:54.523 [2024-04-24 00:40:48.309501] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.782 [2024-04-24 00:40:48.519341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.040 [2024-04-24 00:40:48.737730] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:55.297 00:40:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:55.297 00:40:49 -- common/autotest_common.sh@850 -- # return 0 00:26:55.297 00:40:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:55.297 00:40:49 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:55.297 00:40:49 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:55.555 BaseBdev1 00:26:55.555 00:40:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:55.555 00:40:49 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:55.555 00:40:49 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:56.122 BaseBdev2 00:26:56.122 00:40:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:56.122 00:40:49 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:56.122 00:40:49 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:56.380 BaseBdev3 00:26:56.380 00:40:49 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:56.638 spare_malloc 00:26:56.638 00:40:50 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:56.898 spare_delay 00:26:56.898 00:40:50 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:57.155 [2024-04-24 00:40:50.760256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:57.155 [2024-04-24 00:40:50.760540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:57.155 [2024-04-24 00:40:50.760624] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:26:57.155 [2024-04-24 00:40:50.760751] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:57.155 [2024-04-24 00:40:50.763407] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:57.155 [2024-04-24 00:40:50.763595] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:57.155 spare 00:26:57.155 00:40:50 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:26:57.413 [2024-04-24 00:40:51.040387] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:57.413 [2024-04-24 00:40:51.042770] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:57.413 [2024-04-24 00:40:51.042973] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:57.413 [2024-04-24 00:40:51.043110] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:26:57.413 
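A sketch of the bdev stack assembled just above, repeating the RPCs the trace shows; "rpc" is shorthand for the same rpc.py/socket pair used throughout this run:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# three base bdevs plus the backing device for the spare
for b in BaseBdev1 BaseBdev2 BaseBdev3 spare_malloc; do
    $rpc bdev_malloc_create 32 512 -b "$b"
done
# the spare is a passthru layered on a delay bdev, as in bdev_raid.sh@559/@560
$rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$rpc bdev_passthru_create -b spare_delay -p spare
# assemble the raid5f array with the 64 KiB strip size the test configures
$rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1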
[2024-04-24 00:40:51.043265] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:57.413 [2024-04-24 00:40:51.043524] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:26:57.413 [2024-04-24 00:40:51.050379] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:26:57.413 [2024-04-24 00:40:51.050516] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:26:57.413 [2024-04-24 00:40:51.050882] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:57.413 00:40:51 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:57.413 00:40:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:57.413 00:40:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:57.413 00:40:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:57.413 00:40:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:57.413 00:40:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:57.413 00:40:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:57.413 00:40:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:57.413 00:40:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:57.413 00:40:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:57.413 00:40:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.413 00:40:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.671 00:40:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:57.671 "name": "raid_bdev1", 00:26:57.671 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:26:57.671 "strip_size_kb": 64, 00:26:57.671 "state": "online", 00:26:57.671 "raid_level": "raid5f", 00:26:57.671 "superblock": false, 00:26:57.671 "num_base_bdevs": 3, 00:26:57.671 "num_base_bdevs_discovered": 3, 00:26:57.671 "num_base_bdevs_operational": 3, 00:26:57.671 "base_bdevs_list": [ 00:26:57.671 { 00:26:57.671 "name": "BaseBdev1", 00:26:57.671 "uuid": "4009ecf9-bf51-4788-be5c-144110518232", 00:26:57.671 "is_configured": true, 00:26:57.671 "data_offset": 0, 00:26:57.671 "data_size": 65536 00:26:57.671 }, 00:26:57.671 { 00:26:57.671 "name": "BaseBdev2", 00:26:57.671 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:26:57.671 "is_configured": true, 00:26:57.671 "data_offset": 0, 00:26:57.671 "data_size": 65536 00:26:57.671 }, 00:26:57.671 { 00:26:57.671 "name": "BaseBdev3", 00:26:57.671 "uuid": "0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:26:57.671 "is_configured": true, 00:26:57.671 "data_offset": 0, 00:26:57.671 "data_size": 65536 00:26:57.671 } 00:26:57.671 ] 00:26:57.671 }' 00:26:57.671 00:40:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:57.671 00:40:51 -- common/autotest_common.sh@10 -- # set +x 00:26:58.236 00:40:52 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:58.236 00:40:52 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:26:58.493 [2024-04-24 00:40:52.279258] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:58.752 00:40:52 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:26:58.752 00:40:52 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:26:58.752 00:40:52 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:59.009 00:40:52 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:26:59.009 00:40:52 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:26:59.009 00:40:52 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:26:59.009 00:40:52 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:59.009 00:40:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:59.009 00:40:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:59.009 00:40:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:59.009 00:40:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:59.009 00:40:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:59.009 00:40:52 -- bdev/nbd_common.sh@12 -- # local i 00:26:59.009 00:40:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:59.009 00:40:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:59.009 00:40:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:59.267 [2024-04-24 00:40:52.895295] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:26:59.267 /dev/nbd0 00:26:59.267 00:40:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:59.267 00:40:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:59.267 00:40:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:26:59.267 00:40:52 -- common/autotest_common.sh@855 -- # local i 00:26:59.267 00:40:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:26:59.267 00:40:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:26:59.267 00:40:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:26:59.267 00:40:52 -- common/autotest_common.sh@859 -- # break 00:26:59.267 00:40:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:59.267 00:40:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:59.267 00:40:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:59.267 1+0 records in 00:26:59.267 1+0 records out 00:26:59.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000773528 s, 5.3 MB/s 00:26:59.267 00:40:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.267 00:40:52 -- common/autotest_common.sh@872 -- # size=4096 00:26:59.267 00:40:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:59.267 00:40:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:26:59.267 00:40:52 -- common/autotest_common.sh@875 -- # return 0 00:26:59.267 00:40:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:59.267 00:40:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:59.267 00:40:52 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:26:59.267 00:40:52 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:26:59.267 00:40:52 -- bdev/bdev_raid.sh@582 -- # echo 128 00:26:59.267 00:40:52 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:26:59.832 512+0 records in 00:26:59.832 512+0 records out 00:26:59.832 67108864 bytes (67 MB, 64 MiB) copied, 0.437149 s, 154 MB/s 00:26:59.832 00:40:53 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:59.832 00:40:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:26:59.832 00:40:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:59.832 00:40:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:59.832 00:40:53 -- bdev/nbd_common.sh@51 -- # local i 00:26:59.832 00:40:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:59.832 00:40:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:00.090 [2024-04-24 00:40:53.785828] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:00.090 00:40:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:00.090 00:40:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:00.090 00:40:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:00.090 00:40:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:00.090 00:40:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:00.090 00:40:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:00.090 00:40:53 -- bdev/nbd_common.sh@41 -- # break 00:27:00.090 00:40:53 -- bdev/nbd_common.sh@45 -- # return 0 00:27:00.090 00:40:53 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:00.349 [2024-04-24 00:40:53.998869] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:00.349 00:40:54 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:00.349 00:40:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:00.349 00:40:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:00.349 00:40:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:00.349 00:40:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:00.349 00:40:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:00.349 00:40:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:00.349 00:40:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:00.349 00:40:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:00.349 00:40:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:00.349 00:40:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.349 00:40:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.606 00:40:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:00.606 "name": "raid_bdev1", 00:27:00.606 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:27:00.606 "strip_size_kb": 64, 00:27:00.606 "state": "online", 00:27:00.606 "raid_level": "raid5f", 00:27:00.606 "superblock": false, 00:27:00.606 "num_base_bdevs": 3, 00:27:00.606 "num_base_bdevs_discovered": 2, 00:27:00.606 "num_base_bdevs_operational": 2, 00:27:00.606 "base_bdevs_list": [ 00:27:00.606 { 00:27:00.606 "name": null, 00:27:00.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.606 "is_configured": false, 00:27:00.606 "data_offset": 0, 00:27:00.606 "data_size": 65536 00:27:00.606 }, 00:27:00.606 { 00:27:00.606 "name": "BaseBdev2", 00:27:00.606 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:27:00.606 "is_configured": true, 00:27:00.606 "data_offset": 0, 00:27:00.606 "data_size": 65536 00:27:00.606 }, 00:27:00.606 { 00:27:00.606 "name": "BaseBdev3", 00:27:00.606 "uuid": "0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:27:00.606 "is_configured": true, 00:27:00.606 "data_offset": 0, 00:27:00.606 "data_size": 65536 00:27:00.606 } 00:27:00.606 ] 00:27:00.606 }' 
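A sketch of the write-and-degrade sequence the trace above records: expose raid_bdev1 over NBD, write 512 write-units of random data, then drop BaseBdev1 and confirm the array stays online with two base bdevs. Same rpc.py/socket assumptions as before; illustration only:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc nbd_start_disk raid_bdev1 /dev/nbd0
# 131072-byte blocks match the 256-block (512 B) write unit computed for raid5f above
dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
$rpc nbd_stop_disk /dev/nbd0
$rpc bdev_raid_remove_base_bdev BaseBdev1
# expect state "online" with num_base_bdevs_discovered == 2, as in the dump above
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'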
00:27:00.606 00:40:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:00.606 00:40:54 -- common/autotest_common.sh@10 -- # set +x 00:27:01.171 00:40:54 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:01.429 [2024-04-24 00:40:55.063187] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:27:01.429 [2024-04-24 00:40:55.063441] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:01.429 [2024-04-24 00:40:55.080999] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:27:01.429 [2024-04-24 00:40:55.090219] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:01.429 00:40:55 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:27:02.368 00:40:56 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:02.368 00:40:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:02.368 00:40:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:02.368 00:40:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:02.368 00:40:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:02.368 00:40:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.368 00:40:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.626 00:40:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:02.626 "name": "raid_bdev1", 00:27:02.626 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:27:02.626 "strip_size_kb": 64, 00:27:02.626 "state": "online", 00:27:02.626 "raid_level": "raid5f", 00:27:02.626 "superblock": false, 00:27:02.626 "num_base_bdevs": 3, 00:27:02.626 "num_base_bdevs_discovered": 3, 00:27:02.626 "num_base_bdevs_operational": 3, 00:27:02.626 "process": { 00:27:02.626 "type": "rebuild", 00:27:02.626 "target": "spare", 00:27:02.626 "progress": { 00:27:02.626 "blocks": 24576, 00:27:02.626 "percent": 18 00:27:02.626 } 00:27:02.626 }, 00:27:02.626 "base_bdevs_list": [ 00:27:02.626 { 00:27:02.626 "name": "spare", 00:27:02.626 "uuid": "fa6ee191-8660-5ffd-b470-01221ba90c1d", 00:27:02.626 "is_configured": true, 00:27:02.626 "data_offset": 0, 00:27:02.626 "data_size": 65536 00:27:02.626 }, 00:27:02.626 { 00:27:02.626 "name": "BaseBdev2", 00:27:02.626 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:27:02.626 "is_configured": true, 00:27:02.626 "data_offset": 0, 00:27:02.626 "data_size": 65536 00:27:02.626 }, 00:27:02.626 { 00:27:02.626 "name": "BaseBdev3", 00:27:02.626 "uuid": "0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:27:02.626 "is_configured": true, 00:27:02.626 "data_offset": 0, 00:27:02.626 "data_size": 65536 00:27:02.626 } 00:27:02.626 ] 00:27:02.626 }' 00:27:02.626 00:40:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:02.884 00:40:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:02.884 00:40:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:02.884 00:40:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:02.884 00:40:56 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:03.142 [2024-04-24 00:40:56.717149] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:03.142 [2024-04-24 00:40:56.807268] 
bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:03.142 [2024-04-24 00:40:56.807537] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:03.142 00:40:56 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:03.142 00:40:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:03.142 00:40:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:03.142 00:40:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:03.142 00:40:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:03.142 00:40:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:03.142 00:40:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:03.142 00:40:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:03.142 00:40:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:03.142 00:40:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:03.142 00:40:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.142 00:40:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.400 00:40:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:03.400 "name": "raid_bdev1", 00:27:03.400 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:27:03.400 "strip_size_kb": 64, 00:27:03.400 "state": "online", 00:27:03.400 "raid_level": "raid5f", 00:27:03.400 "superblock": false, 00:27:03.400 "num_base_bdevs": 3, 00:27:03.400 "num_base_bdevs_discovered": 2, 00:27:03.400 "num_base_bdevs_operational": 2, 00:27:03.400 "base_bdevs_list": [ 00:27:03.400 { 00:27:03.400 "name": null, 00:27:03.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.400 "is_configured": false, 00:27:03.400 "data_offset": 0, 00:27:03.400 "data_size": 65536 00:27:03.400 }, 00:27:03.400 { 00:27:03.400 "name": "BaseBdev2", 00:27:03.400 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:27:03.400 "is_configured": true, 00:27:03.400 "data_offset": 0, 00:27:03.400 "data_size": 65536 00:27:03.400 }, 00:27:03.400 { 00:27:03.400 "name": "BaseBdev3", 00:27:03.400 "uuid": "0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:27:03.400 "is_configured": true, 00:27:03.400 "data_offset": 0, 00:27:03.400 "data_size": 65536 00:27:03.400 } 00:27:03.400 ] 00:27:03.400 }' 00:27:03.400 00:40:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:03.400 00:40:57 -- common/autotest_common.sh@10 -- # set +x 00:27:03.967 00:40:57 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:03.967 00:40:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:03.967 00:40:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:03.967 00:40:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:03.967 00:40:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:03.967 00:40:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.967 00:40:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.224 00:40:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:04.224 "name": "raid_bdev1", 00:27:04.224 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:27:04.224 "strip_size_kb": 64, 00:27:04.224 "state": "online", 00:27:04.224 "raid_level": "raid5f", 00:27:04.224 "superblock": false, 00:27:04.224 "num_base_bdevs": 3, 00:27:04.224 
"num_base_bdevs_discovered": 2, 00:27:04.224 "num_base_bdevs_operational": 2, 00:27:04.224 "base_bdevs_list": [ 00:27:04.224 { 00:27:04.224 "name": null, 00:27:04.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.224 "is_configured": false, 00:27:04.224 "data_offset": 0, 00:27:04.224 "data_size": 65536 00:27:04.224 }, 00:27:04.224 { 00:27:04.224 "name": "BaseBdev2", 00:27:04.224 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:27:04.224 "is_configured": true, 00:27:04.224 "data_offset": 0, 00:27:04.224 "data_size": 65536 00:27:04.224 }, 00:27:04.224 { 00:27:04.224 "name": "BaseBdev3", 00:27:04.224 "uuid": "0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:27:04.225 "is_configured": true, 00:27:04.225 "data_offset": 0, 00:27:04.225 "data_size": 65536 00:27:04.225 } 00:27:04.225 ] 00:27:04.225 }' 00:27:04.225 00:40:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:04.482 00:40:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:04.482 00:40:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:04.482 00:40:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:04.482 00:40:58 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:04.741 [2024-04-24 00:40:58.280631] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:27:04.741 [2024-04-24 00:40:58.280890] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:04.741 [2024-04-24 00:40:58.298947] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:27:04.741 [2024-04-24 00:40:58.308298] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:04.741 00:40:58 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:27:05.670 00:40:59 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:05.670 00:40:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:05.670 00:40:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:05.670 00:40:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:05.670 00:40:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:05.670 00:40:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.670 00:40:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:05.928 "name": "raid_bdev1", 00:27:05.928 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:27:05.928 "strip_size_kb": 64, 00:27:05.928 "state": "online", 00:27:05.928 "raid_level": "raid5f", 00:27:05.928 "superblock": false, 00:27:05.928 "num_base_bdevs": 3, 00:27:05.928 "num_base_bdevs_discovered": 3, 00:27:05.928 "num_base_bdevs_operational": 3, 00:27:05.928 "process": { 00:27:05.928 "type": "rebuild", 00:27:05.928 "target": "spare", 00:27:05.928 "progress": { 00:27:05.928 "blocks": 24576, 00:27:05.928 "percent": 18 00:27:05.928 } 00:27:05.928 }, 00:27:05.928 "base_bdevs_list": [ 00:27:05.928 { 00:27:05.928 "name": "spare", 00:27:05.928 "uuid": "fa6ee191-8660-5ffd-b470-01221ba90c1d", 00:27:05.928 "is_configured": true, 00:27:05.928 "data_offset": 0, 00:27:05.928 "data_size": 65536 00:27:05.928 }, 00:27:05.928 { 00:27:05.928 "name": "BaseBdev2", 00:27:05.928 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:27:05.928 "is_configured": true, 
00:27:05.928 "data_offset": 0, 00:27:05.928 "data_size": 65536 00:27:05.928 }, 00:27:05.928 { 00:27:05.928 "name": "BaseBdev3", 00:27:05.928 "uuid": "0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:27:05.928 "is_configured": true, 00:27:05.928 "data_offset": 0, 00:27:05.928 "data_size": 65536 00:27:05.928 } 00:27:05.928 ] 00:27:05.928 }' 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@657 -- # local timeout=682 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.928 00:40:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.186 00:40:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:06.186 "name": "raid_bdev1", 00:27:06.186 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:27:06.186 "strip_size_kb": 64, 00:27:06.186 "state": "online", 00:27:06.186 "raid_level": "raid5f", 00:27:06.186 "superblock": false, 00:27:06.186 "num_base_bdevs": 3, 00:27:06.186 "num_base_bdevs_discovered": 3, 00:27:06.186 "num_base_bdevs_operational": 3, 00:27:06.186 "process": { 00:27:06.186 "type": "rebuild", 00:27:06.186 "target": "spare", 00:27:06.186 "progress": { 00:27:06.186 "blocks": 32768, 00:27:06.186 "percent": 25 00:27:06.186 } 00:27:06.186 }, 00:27:06.186 "base_bdevs_list": [ 00:27:06.186 { 00:27:06.186 "name": "spare", 00:27:06.186 "uuid": "fa6ee191-8660-5ffd-b470-01221ba90c1d", 00:27:06.186 "is_configured": true, 00:27:06.186 "data_offset": 0, 00:27:06.186 "data_size": 65536 00:27:06.186 }, 00:27:06.186 { 00:27:06.186 "name": "BaseBdev2", 00:27:06.186 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:27:06.186 "is_configured": true, 00:27:06.186 "data_offset": 0, 00:27:06.186 "data_size": 65536 00:27:06.186 }, 00:27:06.186 { 00:27:06.186 "name": "BaseBdev3", 00:27:06.186 "uuid": "0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:27:06.186 "is_configured": true, 00:27:06.186 "data_offset": 0, 00:27:06.186 "data_size": 65536 00:27:06.186 } 00:27:06.186 ] 00:27:06.186 }' 00:27:06.186 00:40:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:06.444 00:41:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:06.444 00:41:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:06.444 00:41:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:06.444 00:41:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:07.377 00:41:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:07.377 
00:41:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:07.377 00:41:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:07.377 00:41:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:07.377 00:41:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:07.377 00:41:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:07.377 00:41:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.377 00:41:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.635 00:41:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:07.635 "name": "raid_bdev1", 00:27:07.635 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:27:07.635 "strip_size_kb": 64, 00:27:07.635 "state": "online", 00:27:07.635 "raid_level": "raid5f", 00:27:07.635 "superblock": false, 00:27:07.635 "num_base_bdevs": 3, 00:27:07.635 "num_base_bdevs_discovered": 3, 00:27:07.635 "num_base_bdevs_operational": 3, 00:27:07.635 "process": { 00:27:07.635 "type": "rebuild", 00:27:07.635 "target": "spare", 00:27:07.636 "progress": { 00:27:07.636 "blocks": 61440, 00:27:07.636 "percent": 46 00:27:07.636 } 00:27:07.636 }, 00:27:07.636 "base_bdevs_list": [ 00:27:07.636 { 00:27:07.636 "name": "spare", 00:27:07.636 "uuid": "fa6ee191-8660-5ffd-b470-01221ba90c1d", 00:27:07.636 "is_configured": true, 00:27:07.636 "data_offset": 0, 00:27:07.636 "data_size": 65536 00:27:07.636 }, 00:27:07.636 { 00:27:07.636 "name": "BaseBdev2", 00:27:07.636 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:27:07.636 "is_configured": true, 00:27:07.636 "data_offset": 0, 00:27:07.636 "data_size": 65536 00:27:07.636 }, 00:27:07.636 { 00:27:07.636 "name": "BaseBdev3", 00:27:07.636 "uuid": "0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:27:07.636 "is_configured": true, 00:27:07.636 "data_offset": 0, 00:27:07.636 "data_size": 65536 00:27:07.636 } 00:27:07.636 ] 00:27:07.636 }' 00:27:07.636 00:41:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:07.893 00:41:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:07.893 00:41:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:07.893 00:41:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:07.893 00:41:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:08.839 00:41:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:08.839 00:41:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:08.839 00:41:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:08.839 00:41:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:08.839 00:41:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:08.839 00:41:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:08.839 00:41:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.839 00:41:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.096 00:41:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:09.096 "name": "raid_bdev1", 00:27:09.096 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:27:09.096 "strip_size_kb": 64, 00:27:09.096 "state": "online", 00:27:09.096 "raid_level": "raid5f", 00:27:09.096 "superblock": false, 00:27:09.096 "num_base_bdevs": 3, 00:27:09.096 "num_base_bdevs_discovered": 3, 00:27:09.096 "num_base_bdevs_operational": 3, 
00:27:09.096 "process": { 00:27:09.096 "type": "rebuild", 00:27:09.096 "target": "spare", 00:27:09.096 "progress": { 00:27:09.096 "blocks": 90112, 00:27:09.096 "percent": 68 00:27:09.096 } 00:27:09.096 }, 00:27:09.096 "base_bdevs_list": [ 00:27:09.097 { 00:27:09.097 "name": "spare", 00:27:09.097 "uuid": "fa6ee191-8660-5ffd-b470-01221ba90c1d", 00:27:09.097 "is_configured": true, 00:27:09.097 "data_offset": 0, 00:27:09.097 "data_size": 65536 00:27:09.097 }, 00:27:09.097 { 00:27:09.097 "name": "BaseBdev2", 00:27:09.097 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:27:09.097 "is_configured": true, 00:27:09.097 "data_offset": 0, 00:27:09.097 "data_size": 65536 00:27:09.097 }, 00:27:09.097 { 00:27:09.097 "name": "BaseBdev3", 00:27:09.097 "uuid": "0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:27:09.097 "is_configured": true, 00:27:09.097 "data_offset": 0, 00:27:09.097 "data_size": 65536 00:27:09.097 } 00:27:09.097 ] 00:27:09.097 }' 00:27:09.097 00:41:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:09.097 00:41:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:09.097 00:41:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:09.355 00:41:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:09.355 00:41:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:10.289 00:41:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:10.289 00:41:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:10.289 00:41:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:10.289 00:41:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:10.289 00:41:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:10.289 00:41:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:10.289 00:41:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.289 00:41:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.546 00:41:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:10.546 "name": "raid_bdev1", 00:27:10.546 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:27:10.546 "strip_size_kb": 64, 00:27:10.546 "state": "online", 00:27:10.546 "raid_level": "raid5f", 00:27:10.546 "superblock": false, 00:27:10.546 "num_base_bdevs": 3, 00:27:10.546 "num_base_bdevs_discovered": 3, 00:27:10.546 "num_base_bdevs_operational": 3, 00:27:10.546 "process": { 00:27:10.546 "type": "rebuild", 00:27:10.546 "target": "spare", 00:27:10.546 "progress": { 00:27:10.546 "blocks": 116736, 00:27:10.546 "percent": 89 00:27:10.546 } 00:27:10.546 }, 00:27:10.546 "base_bdevs_list": [ 00:27:10.546 { 00:27:10.546 "name": "spare", 00:27:10.546 "uuid": "fa6ee191-8660-5ffd-b470-01221ba90c1d", 00:27:10.546 "is_configured": true, 00:27:10.546 "data_offset": 0, 00:27:10.546 "data_size": 65536 00:27:10.546 }, 00:27:10.546 { 00:27:10.546 "name": "BaseBdev2", 00:27:10.546 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:27:10.546 "is_configured": true, 00:27:10.546 "data_offset": 0, 00:27:10.546 "data_size": 65536 00:27:10.546 }, 00:27:10.546 { 00:27:10.546 "name": "BaseBdev3", 00:27:10.546 "uuid": "0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:27:10.546 "is_configured": true, 00:27:10.546 "data_offset": 0, 00:27:10.546 "data_size": 65536 00:27:10.546 } 00:27:10.546 ] 00:27:10.546 }' 00:27:10.546 00:41:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:10.546 00:41:04 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:10.546 00:41:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:10.546 00:41:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:10.546 00:41:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:11.111 [2024-04-24 00:41:04.774525] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:11.111 [2024-04-24 00:41:04.775048] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:11.111 [2024-04-24 00:41:04.775352] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:11.676 00:41:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:11.676 00:41:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:11.676 00:41:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:11.676 00:41:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:11.676 00:41:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:11.676 00:41:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:11.676 00:41:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.676 00:41:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:11.934 "name": "raid_bdev1", 00:27:11.934 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:27:11.934 "strip_size_kb": 64, 00:27:11.934 "state": "online", 00:27:11.934 "raid_level": "raid5f", 00:27:11.934 "superblock": false, 00:27:11.934 "num_base_bdevs": 3, 00:27:11.934 "num_base_bdevs_discovered": 3, 00:27:11.934 "num_base_bdevs_operational": 3, 00:27:11.934 "base_bdevs_list": [ 00:27:11.934 { 00:27:11.934 "name": "spare", 00:27:11.934 "uuid": "fa6ee191-8660-5ffd-b470-01221ba90c1d", 00:27:11.934 "is_configured": true, 00:27:11.934 "data_offset": 0, 00:27:11.934 "data_size": 65536 00:27:11.934 }, 00:27:11.934 { 00:27:11.934 "name": "BaseBdev2", 00:27:11.934 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:27:11.934 "is_configured": true, 00:27:11.934 "data_offset": 0, 00:27:11.934 "data_size": 65536 00:27:11.934 }, 00:27:11.934 { 00:27:11.934 "name": "BaseBdev3", 00:27:11.934 "uuid": "0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:27:11.934 "is_configured": true, 00:27:11.934 "data_offset": 0, 00:27:11.934 "data_size": 65536 00:27:11.934 } 00:27:11.934 ] 00:27:11.934 }' 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@660 -- # break 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.934 00:41:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:27:12.500 00:41:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:12.500 "name": "raid_bdev1", 00:27:12.500 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:27:12.500 "strip_size_kb": 64, 00:27:12.500 "state": "online", 00:27:12.500 "raid_level": "raid5f", 00:27:12.500 "superblock": false, 00:27:12.500 "num_base_bdevs": 3, 00:27:12.500 "num_base_bdevs_discovered": 3, 00:27:12.500 "num_base_bdevs_operational": 3, 00:27:12.500 "base_bdevs_list": [ 00:27:12.500 { 00:27:12.500 "name": "spare", 00:27:12.500 "uuid": "fa6ee191-8660-5ffd-b470-01221ba90c1d", 00:27:12.500 "is_configured": true, 00:27:12.500 "data_offset": 0, 00:27:12.500 "data_size": 65536 00:27:12.500 }, 00:27:12.500 { 00:27:12.500 "name": "BaseBdev2", 00:27:12.500 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:27:12.500 "is_configured": true, 00:27:12.500 "data_offset": 0, 00:27:12.500 "data_size": 65536 00:27:12.500 }, 00:27:12.500 { 00:27:12.500 "name": "BaseBdev3", 00:27:12.500 "uuid": "0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:27:12.500 "is_configured": true, 00:27:12.500 "data_offset": 0, 00:27:12.500 "data_size": 65536 00:27:12.500 } 00:27:12.500 ] 00:27:12.500 }' 00:27:12.500 00:41:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:12.500 00:41:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:12.500 00:41:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:12.500 00:41:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:12.500 00:41:06 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:12.501 00:41:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:12.501 00:41:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:12.501 00:41:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:12.501 00:41:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:12.501 00:41:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:12.501 00:41:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:12.501 00:41:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:12.501 00:41:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:12.501 00:41:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:12.501 00:41:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.501 00:41:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.760 00:41:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:12.760 "name": "raid_bdev1", 00:27:12.760 "uuid": "8c0a79e1-181e-4fd7-8887-6bb356239aab", 00:27:12.760 "strip_size_kb": 64, 00:27:12.760 "state": "online", 00:27:12.760 "raid_level": "raid5f", 00:27:12.760 "superblock": false, 00:27:12.760 "num_base_bdevs": 3, 00:27:12.760 "num_base_bdevs_discovered": 3, 00:27:12.760 "num_base_bdevs_operational": 3, 00:27:12.760 "base_bdevs_list": [ 00:27:12.760 { 00:27:12.760 "name": "spare", 00:27:12.760 "uuid": "fa6ee191-8660-5ffd-b470-01221ba90c1d", 00:27:12.760 "is_configured": true, 00:27:12.760 "data_offset": 0, 00:27:12.760 "data_size": 65536 00:27:12.760 }, 00:27:12.760 { 00:27:12.760 "name": "BaseBdev2", 00:27:12.760 "uuid": "92219f0c-8473-4034-be64-614339ed8587", 00:27:12.760 "is_configured": true, 00:27:12.760 "data_offset": 0, 00:27:12.760 "data_size": 65536 00:27:12.760 }, 00:27:12.760 { 00:27:12.760 "name": "BaseBdev3", 00:27:12.760 "uuid": 
"0b5f237a-67de-43f7-94cc-e97183c0c7ec", 00:27:12.760 "is_configured": true, 00:27:12.760 "data_offset": 0, 00:27:12.760 "data_size": 65536 00:27:12.760 } 00:27:12.760 ] 00:27:12.760 }' 00:27:12.760 00:41:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:12.760 00:41:06 -- common/autotest_common.sh@10 -- # set +x 00:27:13.327 00:41:07 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:13.585 [2024-04-24 00:41:07.257561] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:13.585 [2024-04-24 00:41:07.257814] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:13.585 [2024-04-24 00:41:07.258000] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:13.585 [2024-04-24 00:41:07.258205] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:13.585 [2024-04-24 00:41:07.258383] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:27:13.585 00:41:07 -- bdev/bdev_raid.sh@671 -- # jq length 00:27:13.585 00:41:07 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.844 00:41:07 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:27:13.844 00:41:07 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:27:13.844 00:41:07 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:13.844 00:41:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:13.844 00:41:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:13.844 00:41:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:13.844 00:41:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:13.844 00:41:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:13.844 00:41:07 -- bdev/nbd_common.sh@12 -- # local i 00:27:13.844 00:41:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:13.844 00:41:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:13.844 00:41:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:14.103 /dev/nbd0 00:27:14.103 00:41:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:14.103 00:41:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:14.103 00:41:07 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:27:14.103 00:41:07 -- common/autotest_common.sh@855 -- # local i 00:27:14.103 00:41:07 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:14.103 00:41:07 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:14.103 00:41:07 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:27:14.103 00:41:07 -- common/autotest_common.sh@859 -- # break 00:27:14.103 00:41:07 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:14.103 00:41:07 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:14.103 00:41:07 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:14.103 1+0 records in 00:27:14.103 1+0 records out 00:27:14.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566372 s, 7.2 MB/s 00:27:14.103 00:41:07 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:14.103 00:41:07 
-- common/autotest_common.sh@872 -- # size=4096 00:27:14.103 00:41:07 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:14.103 00:41:07 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:14.103 00:41:07 -- common/autotest_common.sh@875 -- # return 0 00:27:14.103 00:41:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:14.103 00:41:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:14.103 00:41:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:14.670 /dev/nbd1 00:27:14.670 00:41:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:14.670 00:41:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:14.670 00:41:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:27:14.670 00:41:08 -- common/autotest_common.sh@855 -- # local i 00:27:14.670 00:41:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:14.670 00:41:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:14.670 00:41:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:27:14.670 00:41:08 -- common/autotest_common.sh@859 -- # break 00:27:14.670 00:41:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:14.670 00:41:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:14.670 00:41:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:14.670 1+0 records in 00:27:14.670 1+0 records out 00:27:14.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708565 s, 5.8 MB/s 00:27:14.670 00:41:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:14.670 00:41:08 -- common/autotest_common.sh@872 -- # size=4096 00:27:14.670 00:41:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:14.670 00:41:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:14.670 00:41:08 -- common/autotest_common.sh@875 -- # return 0 00:27:14.670 00:41:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:14.670 00:41:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:14.670 00:41:08 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:14.670 00:41:08 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:14.670 00:41:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:14.670 00:41:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:14.670 00:41:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:14.670 00:41:08 -- bdev/nbd_common.sh@51 -- # local i 00:27:14.670 00:41:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:14.670 00:41:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:15.236 00:41:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:15.236 00:41:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:15.236 00:41:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:15.236 00:41:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:15.236 00:41:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:15.236 00:41:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:15.236 00:41:08 -- bdev/nbd_common.sh@41 -- # break 00:27:15.236 00:41:08 -- bdev/nbd_common.sh@45 -- # return 0 00:27:15.236 00:41:08 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:27:15.236 00:41:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:15.494 00:41:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:15.494 00:41:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:15.494 00:41:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:15.494 00:41:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:15.494 00:41:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:15.495 00:41:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:15.495 00:41:09 -- bdev/nbd_common.sh@41 -- # break 00:27:15.495 00:41:09 -- bdev/nbd_common.sh@45 -- # return 0 00:27:15.495 00:41:09 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:27:15.495 00:41:09 -- bdev/bdev_raid.sh@709 -- # killprocess 137430 00:27:15.495 00:41:09 -- common/autotest_common.sh@936 -- # '[' -z 137430 ']' 00:27:15.495 00:41:09 -- common/autotest_common.sh@940 -- # kill -0 137430 00:27:15.495 00:41:09 -- common/autotest_common.sh@941 -- # uname 00:27:15.495 00:41:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:15.495 00:41:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137430 00:27:15.495 00:41:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:15.495 00:41:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:15.495 00:41:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137430' 00:27:15.495 killing process with pid 137430 00:27:15.495 00:41:09 -- common/autotest_common.sh@955 -- # kill 137430 00:27:15.495 Received shutdown signal, test time was about 60.000000 seconds 00:27:15.495 00:27:15.495 Latency(us) 00:27:15.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.495 =================================================================================================================== 00:27:15.495 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:15.495 00:41:09 -- common/autotest_common.sh@960 -- # wait 137430 00:27:15.495 [2024-04-24 00:41:09.146473] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:16.062 [2024-04-24 00:41:09.575505] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:17.440 ************************************ 00:27:17.440 END TEST raid5f_rebuild_test 00:27:17.440 ************************************ 00:27:17.440 00:41:10 -- bdev/bdev_raid.sh@711 -- # return 0 00:27:17.440 00:27:17.440 real 0m22.937s 00:27:17.440 user 0m33.802s 00:27:17.440 sys 0m3.428s 00:27:17.440 00:41:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:17.440 00:41:10 -- common/autotest_common.sh@10 -- # set +x 00:27:17.440 00:41:10 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:27:17.440 00:41:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:27:17.440 00:41:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:17.440 00:41:10 -- common/autotest_common.sh@10 -- # set +x 00:27:17.440 ************************************ 00:27:17.440 START TEST raid5f_rebuild_test_sb 00:27:17.440 ************************************ 00:27:17.440 00:41:11 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 3 true false 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@519 -- # 
local superblock=true 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@544 -- # raid_pid=137994 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137994 /var/tmp/spdk-raid.sock 00:27:17.440 00:41:11 -- common/autotest_common.sh@817 -- # '[' -z 137994 ']' 00:27:17.440 00:41:11 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:17.440 00:41:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:17.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:17.440 00:41:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:17.440 00:41:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:17.440 00:41:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:17.440 00:41:11 -- common/autotest_common.sh@10 -- # set +x 00:27:17.440 [2024-04-24 00:41:11.140371] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:27:17.440 [2024-04-24 00:41:11.140798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137994 ] 00:27:17.440 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:17.440 Zero copy mechanism will not be used. 
00:27:17.699 [2024-04-24 00:41:11.307617] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.959 [2024-04-24 00:41:11.576065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.218 [2024-04-24 00:41:11.812758] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:18.476 00:41:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:18.476 00:41:12 -- common/autotest_common.sh@850 -- # return 0 00:27:18.476 00:41:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:18.476 00:41:12 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:18.476 00:41:12 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:18.736 BaseBdev1_malloc 00:27:18.736 00:41:12 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:19.007 [2024-04-24 00:41:12.734551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:19.007 [2024-04-24 00:41:12.734907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.007 [2024-04-24 00:41:12.735083] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:27:19.007 [2024-04-24 00:41:12.735219] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.007 [2024-04-24 00:41:12.737771] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.007 [2024-04-24 00:41:12.737938] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:19.007 BaseBdev1 00:27:19.007 00:41:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:19.007 00:41:12 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:19.007 00:41:12 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:19.271 BaseBdev2_malloc 00:27:19.271 00:41:13 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:19.530 [2024-04-24 00:41:13.211441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:19.530 [2024-04-24 00:41:13.211765] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.530 [2024-04-24 00:41:13.211952] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:19.530 [2024-04-24 00:41:13.212126] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.530 [2024-04-24 00:41:13.214864] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.530 [2024-04-24 00:41:13.215066] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:19.530 BaseBdev2 00:27:19.530 00:41:13 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:19.530 00:41:13 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:19.530 00:41:13 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:19.790 BaseBdev3_malloc 00:27:19.790 00:41:13 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:27:20.049 [2024-04-24 00:41:13.664033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:20.049 [2024-04-24 00:41:13.664330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.049 [2024-04-24 00:41:13.664491] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:27:20.049 [2024-04-24 00:41:13.664657] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.049 [2024-04-24 00:41:13.667046] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.049 [2024-04-24 00:41:13.667207] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:20.049 BaseBdev3 00:27:20.049 00:41:13 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:20.309 spare_malloc 00:27:20.309 00:41:13 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:20.566 spare_delay 00:27:20.566 00:41:14 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:20.824 [2024-04-24 00:41:14.404890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:20.824 [2024-04-24 00:41:14.405182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.824 [2024-04-24 00:41:14.405298] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:20.824 [2024-04-24 00:41:14.405428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.824 [2024-04-24 00:41:14.407939] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.824 [2024-04-24 00:41:14.408126] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:20.824 spare 00:27:20.824 00:41:14 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:27:20.824 [2024-04-24 00:41:14.605044] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:20.824 [2024-04-24 00:41:14.607365] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:20.824 [2024-04-24 00:41:14.607583] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:20.824 [2024-04-24 00:41:14.607912] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:27:20.824 [2024-04-24 00:41:14.608053] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:20.824 [2024-04-24 00:41:14.608227] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:27:20.824 [2024-04-24 00:41:14.613638] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:27:20.824 [2024-04-24 00:41:14.613754] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:27:20.824 [2024-04-24 00:41:14.614044] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@564 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.082 00:41:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:21.082 "name": "raid_bdev1", 00:27:21.082 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:21.082 "strip_size_kb": 64, 00:27:21.082 "state": "online", 00:27:21.082 "raid_level": "raid5f", 00:27:21.082 "superblock": true, 00:27:21.082 "num_base_bdevs": 3, 00:27:21.082 "num_base_bdevs_discovered": 3, 00:27:21.082 "num_base_bdevs_operational": 3, 00:27:21.082 "base_bdevs_list": [ 00:27:21.082 { 00:27:21.082 "name": "BaseBdev1", 00:27:21.082 "uuid": "1684a6ba-c00d-54bf-9c76-baf012fe0d1b", 00:27:21.082 "is_configured": true, 00:27:21.082 "data_offset": 2048, 00:27:21.082 "data_size": 63488 00:27:21.082 }, 00:27:21.082 { 00:27:21.083 "name": "BaseBdev2", 00:27:21.083 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:21.083 "is_configured": true, 00:27:21.083 "data_offset": 2048, 00:27:21.083 "data_size": 63488 00:27:21.083 }, 00:27:21.083 { 00:27:21.083 "name": "BaseBdev3", 00:27:21.083 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:21.083 "is_configured": true, 00:27:21.083 "data_offset": 2048, 00:27:21.083 "data_size": 63488 00:27:21.083 } 00:27:21.083 ] 00:27:21.083 }' 00:27:21.083 00:41:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:21.083 00:41:14 -- common/autotest_common.sh@10 -- # set +x 00:27:22.019 00:41:15 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:22.019 00:41:15 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:27:22.019 [2024-04-24 00:41:15.733154] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:22.019 00:41:15 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:27:22.019 00:41:15 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.019 00:41:15 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:22.279 00:41:16 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:27:22.279 00:41:16 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:27:22.279 00:41:16 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:27:22.279 00:41:16 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:22.279 00:41:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:22.279 00:41:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:22.279 00:41:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:22.279 00:41:16 -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:22.279 00:41:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:22.279 00:41:16 -- bdev/nbd_common.sh@12 -- # local i 00:27:22.279 00:41:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:22.279 00:41:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:22.279 00:41:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:22.538 [2024-04-24 00:41:16.269215] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:22.538 /dev/nbd0 00:27:22.538 00:41:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:22.538 00:41:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:22.538 00:41:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:27:22.538 00:41:16 -- common/autotest_common.sh@855 -- # local i 00:27:22.538 00:41:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:22.538 00:41:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:22.538 00:41:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:27:22.796 00:41:16 -- common/autotest_common.sh@859 -- # break 00:27:22.796 00:41:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:22.796 00:41:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:22.796 00:41:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:22.796 1+0 records in 00:27:22.796 1+0 records out 00:27:22.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548343 s, 7.5 MB/s 00:27:22.796 00:41:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:22.796 00:41:16 -- common/autotest_common.sh@872 -- # size=4096 00:27:22.796 00:41:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:22.796 00:41:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:22.796 00:41:16 -- common/autotest_common.sh@875 -- # return 0 00:27:22.796 00:41:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:22.796 00:41:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:22.796 00:41:16 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:27:22.797 00:41:16 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:27:22.797 00:41:16 -- bdev/bdev_raid.sh@582 -- # echo 128 00:27:22.797 00:41:16 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:27:23.055 496+0 records in 00:27:23.055 496+0 records out 00:27:23.055 65011712 bytes (65 MB, 62 MiB) copied, 0.488428 s, 133 MB/s 00:27:23.055 00:41:16 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:23.055 00:41:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:23.055 00:41:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:23.055 00:41:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:23.055 00:41:16 -- bdev/nbd_common.sh@51 -- # local i 00:27:23.055 00:41:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:23.055 00:41:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:23.625 00:41:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:23.625 [2024-04-24 00:41:17.130140] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:23.625 00:41:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
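The dd that just completed (496 records of 131072 bytes, 65011712 bytes in total) is sized to fill the whole array in full-stripe writes. The numbers follow from values already shown in this trace; a short worked calculation, as a sketch rather than the script's own code:

    # Worked arithmetic behind the dd above; every input value appears in this trace.
    block_size=512                 # base bdevs are created with 512-byte blocks
    strip_kb=64                    # "-z 64" passed to bdev_raid_create
    num_base_bdevs=3
    data_blocks_per_base=63488     # "data_size": 63488 (65536 blocks minus the 2048-block superblock offset)

    # raid5f holds one parity strip per stripe, so a full-stripe write carries
    # (3 - 1) * 64 KiB = 128 KiB of data = 256 blocks (the write_unit_size=256 / "echo 128" pair above).
    write_unit_blocks=$(( (num_base_bdevs - 1) * strip_kb * 1024 / block_size ))   # 256
    write_unit_bytes=$(( write_unit_blocks * block_size ))                         # 131072

    # Usable array capacity and the number of full-stripe writes needed to cover it.
    raid_blocks=$(( data_blocks_per_base * (num_base_bdevs - 1) ))                 # 126976 ("num_blocks")
    full_stripes=$(( raid_blocks * block_size / write_unit_bytes ))                # 496

    echo "$full_stripes x $write_unit_bytes = $(( full_stripes * write_unit_bytes )) bytes"
    # -> 496 x 131072 = 65011712, matching the dd record count and byte total above.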
00:27:23.625 00:41:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:23.625 00:41:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:23.625 00:41:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:23.626 00:41:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:23.626 00:41:17 -- bdev/nbd_common.sh@41 -- # break 00:27:23.626 00:41:17 -- bdev/nbd_common.sh@45 -- # return 0 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:23.626 [2024-04-24 00:41:17.378609] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.626 00:41:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.883 00:41:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:23.883 "name": "raid_bdev1", 00:27:23.883 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:23.883 "strip_size_kb": 64, 00:27:23.883 "state": "online", 00:27:23.883 "raid_level": "raid5f", 00:27:23.883 "superblock": true, 00:27:23.883 "num_base_bdevs": 3, 00:27:23.883 "num_base_bdevs_discovered": 2, 00:27:23.883 "num_base_bdevs_operational": 2, 00:27:23.883 "base_bdevs_list": [ 00:27:23.883 { 00:27:23.883 "name": null, 00:27:23.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.883 "is_configured": false, 00:27:23.883 "data_offset": 2048, 00:27:23.883 "data_size": 63488 00:27:23.883 }, 00:27:23.883 { 00:27:23.883 "name": "BaseBdev2", 00:27:23.883 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:23.883 "is_configured": true, 00:27:23.883 "data_offset": 2048, 00:27:23.883 "data_size": 63488 00:27:23.883 }, 00:27:23.883 { 00:27:23.883 "name": "BaseBdev3", 00:27:23.883 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:23.883 "is_configured": true, 00:27:23.883 "data_offset": 2048, 00:27:23.883 "data_size": 63488 00:27:23.883 } 00:27:23.883 ] 00:27:23.883 }' 00:27:23.883 00:41:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:23.883 00:41:17 -- common/autotest_common.sh@10 -- # set +x 00:27:24.818 00:41:18 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:24.818 [2024-04-24 00:41:18.546953] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:27:24.818 [2024-04-24 00:41:18.547209] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:24.818 [2024-04-24 00:41:18.566223] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028b70 
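All of the verify_raid_bdev_state / verify_raid_bdev_process calls in this trace follow one pattern: dump every RAID bdev over RPC, select the bdev under test with jq, then compare individual fields against the expected values; the `// "none"` fallback is what lets the same check run cleanly once the rebuild process has disappeared from the JSON. A condensed sketch of that pattern, simplified from the traced helpers in test/bdev/bdev_raid.sh:

    # Condensed sketch of the verification pattern traced above (simplified,
    # not the literal helper bodies from test/bdev/bdev_raid.sh).
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    verify_process() {
        local name=$1 want_type=$2 want_target=$3 info
        info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        # ".process" exists only while a rebuild is in flight; fall back to "none" otherwise.
        [[ $(jq -r '.process.type   // "none"' <<< "$info") == "$want_type"   ]] || return 1
        [[ $(jq -r '.process.target // "none"' <<< "$info") == "$want_target" ]] || return 1
    }

    verify_process raid_bdev1 rebuild spare   # while the spare is being rebuilt
    verify_process raid_bdev1 none none       # after the rebuild has finished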
00:27:24.818 [2024-04-24 00:41:18.575223] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:24.818 00:41:18 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:27:25.813 00:41:19 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.813 00:41:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:25.813 00:41:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:25.813 00:41:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:25.813 00:41:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:25.813 00:41:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.813 00:41:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.382 00:41:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:26.382 "name": "raid_bdev1", 00:27:26.382 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:26.382 "strip_size_kb": 64, 00:27:26.382 "state": "online", 00:27:26.382 "raid_level": "raid5f", 00:27:26.382 "superblock": true, 00:27:26.382 "num_base_bdevs": 3, 00:27:26.382 "num_base_bdevs_discovered": 3, 00:27:26.382 "num_base_bdevs_operational": 3, 00:27:26.382 "process": { 00:27:26.382 "type": "rebuild", 00:27:26.382 "target": "spare", 00:27:26.382 "progress": { 00:27:26.382 "blocks": 24576, 00:27:26.382 "percent": 19 00:27:26.382 } 00:27:26.382 }, 00:27:26.382 "base_bdevs_list": [ 00:27:26.382 { 00:27:26.382 "name": "spare", 00:27:26.382 "uuid": "13826877-85ff-5e9b-a7a6-bcb9a01f9e73", 00:27:26.382 "is_configured": true, 00:27:26.382 "data_offset": 2048, 00:27:26.382 "data_size": 63488 00:27:26.382 }, 00:27:26.382 { 00:27:26.382 "name": "BaseBdev2", 00:27:26.382 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:26.382 "is_configured": true, 00:27:26.382 "data_offset": 2048, 00:27:26.382 "data_size": 63488 00:27:26.382 }, 00:27:26.382 { 00:27:26.382 "name": "BaseBdev3", 00:27:26.382 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:26.382 "is_configured": true, 00:27:26.382 "data_offset": 2048, 00:27:26.382 "data_size": 63488 00:27:26.382 } 00:27:26.382 ] 00:27:26.382 }' 00:27:26.382 00:41:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:26.382 00:41:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:26.382 00:41:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:26.382 00:41:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:26.382 00:41:19 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:26.640 [2024-04-24 00:41:20.261449] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.640 [2024-04-24 00:41:20.293695] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:26.640 [2024-04-24 00:41:20.293990] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.640 00:41:20 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:27:26.640 00:41:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:26.640 00:41:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:26.640 00:41:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:26.640 00:41:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:26.640 00:41:20 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:27:26.640 00:41:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:26.640 00:41:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:26.640 00:41:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:26.640 00:41:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:26.640 00:41:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.640 00:41:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.900 00:41:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:26.900 "name": "raid_bdev1", 00:27:26.900 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:26.900 "strip_size_kb": 64, 00:27:26.900 "state": "online", 00:27:26.900 "raid_level": "raid5f", 00:27:26.900 "superblock": true, 00:27:26.900 "num_base_bdevs": 3, 00:27:26.900 "num_base_bdevs_discovered": 2, 00:27:26.900 "num_base_bdevs_operational": 2, 00:27:26.900 "base_bdevs_list": [ 00:27:26.900 { 00:27:26.900 "name": null, 00:27:26.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.900 "is_configured": false, 00:27:26.900 "data_offset": 2048, 00:27:26.900 "data_size": 63488 00:27:26.900 }, 00:27:26.900 { 00:27:26.900 "name": "BaseBdev2", 00:27:26.900 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:26.900 "is_configured": true, 00:27:26.900 "data_offset": 2048, 00:27:26.900 "data_size": 63488 00:27:26.900 }, 00:27:26.900 { 00:27:26.900 "name": "BaseBdev3", 00:27:26.900 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:26.900 "is_configured": true, 00:27:26.900 "data_offset": 2048, 00:27:26.900 "data_size": 63488 00:27:26.900 } 00:27:26.900 ] 00:27:26.900 }' 00:27:26.900 00:41:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:26.900 00:41:20 -- common/autotest_common.sh@10 -- # set +x 00:27:27.488 00:41:21 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:27.488 00:41:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:27.488 00:41:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:27.488 00:41:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:27.488 00:41:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:27.488 00:41:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.488 00:41:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.745 00:41:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:27.745 "name": "raid_bdev1", 00:27:27.745 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:27.745 "strip_size_kb": 64, 00:27:27.745 "state": "online", 00:27:27.745 "raid_level": "raid5f", 00:27:27.745 "superblock": true, 00:27:27.745 "num_base_bdevs": 3, 00:27:27.745 "num_base_bdevs_discovered": 2, 00:27:27.745 "num_base_bdevs_operational": 2, 00:27:27.745 "base_bdevs_list": [ 00:27:27.745 { 00:27:27.746 "name": null, 00:27:27.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.746 "is_configured": false, 00:27:27.746 "data_offset": 2048, 00:27:27.746 "data_size": 63488 00:27:27.746 }, 00:27:27.746 { 00:27:27.746 "name": "BaseBdev2", 00:27:27.746 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:27.746 "is_configured": true, 00:27:27.746 "data_offset": 2048, 00:27:27.746 "data_size": 63488 00:27:27.746 }, 00:27:27.746 { 00:27:27.746 "name": "BaseBdev3", 00:27:27.746 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:27.746 
"is_configured": true, 00:27:27.746 "data_offset": 2048, 00:27:27.746 "data_size": 63488 00:27:27.746 } 00:27:27.746 ] 00:27:27.746 }' 00:27:27.746 00:41:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:27.746 00:41:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:27.746 00:41:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:27.746 00:41:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:27.746 00:41:21 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:28.004 [2024-04-24 00:41:21.793585] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:27:28.004 [2024-04-24 00:41:21.793807] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:28.262 [2024-04-24 00:41:21.809710] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028d10 00:27:28.262 [2024-04-24 00:41:21.817608] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:28.262 00:41:21 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:27:29.195 00:41:22 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.195 00:41:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:29.195 00:41:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:29.195 00:41:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:29.195 00:41:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:29.195 00:41:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.195 00:41:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:29.472 "name": "raid_bdev1", 00:27:29.472 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:29.472 "strip_size_kb": 64, 00:27:29.472 "state": "online", 00:27:29.472 "raid_level": "raid5f", 00:27:29.472 "superblock": true, 00:27:29.472 "num_base_bdevs": 3, 00:27:29.472 "num_base_bdevs_discovered": 3, 00:27:29.472 "num_base_bdevs_operational": 3, 00:27:29.472 "process": { 00:27:29.472 "type": "rebuild", 00:27:29.472 "target": "spare", 00:27:29.472 "progress": { 00:27:29.472 "blocks": 24576, 00:27:29.472 "percent": 19 00:27:29.472 } 00:27:29.472 }, 00:27:29.472 "base_bdevs_list": [ 00:27:29.472 { 00:27:29.472 "name": "spare", 00:27:29.472 "uuid": "13826877-85ff-5e9b-a7a6-bcb9a01f9e73", 00:27:29.472 "is_configured": true, 00:27:29.472 "data_offset": 2048, 00:27:29.472 "data_size": 63488 00:27:29.472 }, 00:27:29.472 { 00:27:29.472 "name": "BaseBdev2", 00:27:29.472 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:29.472 "is_configured": true, 00:27:29.472 "data_offset": 2048, 00:27:29.472 "data_size": 63488 00:27:29.472 }, 00:27:29.472 { 00:27:29.472 "name": "BaseBdev3", 00:27:29.472 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:29.472 "is_configured": true, 00:27:29.472 "data_offset": 2048, 00:27:29.472 "data_size": 63488 00:27:29.472 } 00:27:29.472 ] 00:27:29.472 }' 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 
00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:27:29.472 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@657 -- # local timeout=706 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.472 00:41:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.731 00:41:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:29.731 "name": "raid_bdev1", 00:27:29.731 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:29.731 "strip_size_kb": 64, 00:27:29.731 "state": "online", 00:27:29.731 "raid_level": "raid5f", 00:27:29.731 "superblock": true, 00:27:29.731 "num_base_bdevs": 3, 00:27:29.731 "num_base_bdevs_discovered": 3, 00:27:29.731 "num_base_bdevs_operational": 3, 00:27:29.731 "process": { 00:27:29.731 "type": "rebuild", 00:27:29.731 "target": "spare", 00:27:29.731 "progress": { 00:27:29.731 "blocks": 32768, 00:27:29.731 "percent": 25 00:27:29.731 } 00:27:29.731 }, 00:27:29.731 "base_bdevs_list": [ 00:27:29.731 { 00:27:29.731 "name": "spare", 00:27:29.731 "uuid": "13826877-85ff-5e9b-a7a6-bcb9a01f9e73", 00:27:29.731 "is_configured": true, 00:27:29.731 "data_offset": 2048, 00:27:29.731 "data_size": 63488 00:27:29.731 }, 00:27:29.731 { 00:27:29.731 "name": "BaseBdev2", 00:27:29.731 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:29.731 "is_configured": true, 00:27:29.731 "data_offset": 2048, 00:27:29.731 "data_size": 63488 00:27:29.731 }, 00:27:29.731 { 00:27:29.731 "name": "BaseBdev3", 00:27:29.731 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:29.731 "is_configured": true, 00:27:29.731 "data_offset": 2048, 00:27:29.731 "data_size": 63488 00:27:29.731 } 00:27:29.731 ] 00:27:29.731 }' 00:27:29.731 00:41:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:29.990 00:41:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:29.990 00:41:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:29.990 00:41:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:29.990 00:41:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:30.931 00:41:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:30.931 00:41:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:30.931 00:41:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:30.931 00:41:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:30.931 00:41:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:30.931 00:41:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:30.931 00:41:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.931 00:41:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.191 00:41:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:31.191 "name": "raid_bdev1", 00:27:31.191 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:31.191 "strip_size_kb": 64, 00:27:31.191 "state": "online", 00:27:31.191 "raid_level": "raid5f", 00:27:31.191 "superblock": true, 00:27:31.191 "num_base_bdevs": 3, 00:27:31.191 "num_base_bdevs_discovered": 3, 00:27:31.191 "num_base_bdevs_operational": 3, 00:27:31.191 "process": { 00:27:31.191 "type": "rebuild", 00:27:31.191 "target": "spare", 00:27:31.191 "progress": { 00:27:31.191 "blocks": 59392, 00:27:31.191 "percent": 46 00:27:31.191 } 00:27:31.191 }, 00:27:31.191 "base_bdevs_list": [ 00:27:31.191 { 00:27:31.191 "name": "spare", 00:27:31.191 "uuid": "13826877-85ff-5e9b-a7a6-bcb9a01f9e73", 00:27:31.191 "is_configured": true, 00:27:31.191 "data_offset": 2048, 00:27:31.191 "data_size": 63488 00:27:31.191 }, 00:27:31.191 { 00:27:31.191 "name": "BaseBdev2", 00:27:31.191 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:31.191 "is_configured": true, 00:27:31.191 "data_offset": 2048, 00:27:31.191 "data_size": 63488 00:27:31.191 }, 00:27:31.191 { 00:27:31.191 "name": "BaseBdev3", 00:27:31.191 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:31.191 "is_configured": true, 00:27:31.191 "data_offset": 2048, 00:27:31.191 "data_size": 63488 00:27:31.191 } 00:27:31.191 ] 00:27:31.191 }' 00:27:31.191 00:41:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:31.191 00:41:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:31.191 00:41:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:31.191 00:41:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:31.191 00:41:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:32.563 00:41:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:32.563 00:41:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:32.563 00:41:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:32.563 00:41:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:32.563 00:41:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:32.563 00:41:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:32.563 00:41:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.563 00:41:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.563 00:41:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:32.563 "name": "raid_bdev1", 00:27:32.563 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:32.563 "strip_size_kb": 64, 00:27:32.563 "state": "online", 00:27:32.563 "raid_level": "raid5f", 00:27:32.563 "superblock": true, 00:27:32.563 "num_base_bdevs": 3, 00:27:32.563 "num_base_bdevs_discovered": 3, 00:27:32.563 "num_base_bdevs_operational": 3, 00:27:32.563 "process": { 00:27:32.563 "type": "rebuild", 00:27:32.563 "target": "spare", 00:27:32.563 "progress": { 00:27:32.563 "blocks": 88064, 00:27:32.563 "percent": 69 00:27:32.563 } 00:27:32.563 }, 00:27:32.563 "base_bdevs_list": [ 00:27:32.563 { 00:27:32.563 "name": "spare", 00:27:32.563 "uuid": "13826877-85ff-5e9b-a7a6-bcb9a01f9e73", 00:27:32.563 "is_configured": true, 00:27:32.563 "data_offset": 2048, 00:27:32.563 "data_size": 63488 00:27:32.563 }, 00:27:32.563 { 
00:27:32.563 "name": "BaseBdev2", 00:27:32.563 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:32.563 "is_configured": true, 00:27:32.563 "data_offset": 2048, 00:27:32.563 "data_size": 63488 00:27:32.563 }, 00:27:32.563 { 00:27:32.563 "name": "BaseBdev3", 00:27:32.563 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:32.563 "is_configured": true, 00:27:32.563 "data_offset": 2048, 00:27:32.563 "data_size": 63488 00:27:32.563 } 00:27:32.563 ] 00:27:32.563 }' 00:27:32.563 00:41:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:32.563 00:41:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:32.563 00:41:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:32.563 00:41:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:32.563 00:41:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:33.974 "name": "raid_bdev1", 00:27:33.974 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:33.974 "strip_size_kb": 64, 00:27:33.974 "state": "online", 00:27:33.974 "raid_level": "raid5f", 00:27:33.974 "superblock": true, 00:27:33.974 "num_base_bdevs": 3, 00:27:33.974 "num_base_bdevs_discovered": 3, 00:27:33.974 "num_base_bdevs_operational": 3, 00:27:33.974 "process": { 00:27:33.974 "type": "rebuild", 00:27:33.974 "target": "spare", 00:27:33.974 "progress": { 00:27:33.974 "blocks": 114688, 00:27:33.974 "percent": 90 00:27:33.974 } 00:27:33.974 }, 00:27:33.974 "base_bdevs_list": [ 00:27:33.974 { 00:27:33.974 "name": "spare", 00:27:33.974 "uuid": "13826877-85ff-5e9b-a7a6-bcb9a01f9e73", 00:27:33.974 "is_configured": true, 00:27:33.974 "data_offset": 2048, 00:27:33.974 "data_size": 63488 00:27:33.974 }, 00:27:33.974 { 00:27:33.974 "name": "BaseBdev2", 00:27:33.974 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:33.974 "is_configured": true, 00:27:33.974 "data_offset": 2048, 00:27:33.974 "data_size": 63488 00:27:33.974 }, 00:27:33.974 { 00:27:33.974 "name": "BaseBdev3", 00:27:33.974 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:33.974 "is_configured": true, 00:27:33.974 "data_offset": 2048, 00:27:33.974 "data_size": 63488 00:27:33.974 } 00:27:33.974 ] 00:27:33.974 }' 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:33.974 00:41:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:34.539 [2024-04-24 00:41:28.084802] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:34.539 [2024-04-24 00:41:28.085097] 
bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:34.539 [2024-04-24 00:41:28.085350] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.106 00:41:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:35.106 00:41:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:35.106 00:41:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:35.106 00:41:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:35.106 00:41:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:35.106 00:41:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:35.106 00:41:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.106 00:41:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.364 00:41:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:35.364 "name": "raid_bdev1", 00:27:35.364 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:35.364 "strip_size_kb": 64, 00:27:35.364 "state": "online", 00:27:35.364 "raid_level": "raid5f", 00:27:35.364 "superblock": true, 00:27:35.364 "num_base_bdevs": 3, 00:27:35.364 "num_base_bdevs_discovered": 3, 00:27:35.364 "num_base_bdevs_operational": 3, 00:27:35.364 "base_bdevs_list": [ 00:27:35.364 { 00:27:35.364 "name": "spare", 00:27:35.364 "uuid": "13826877-85ff-5e9b-a7a6-bcb9a01f9e73", 00:27:35.364 "is_configured": true, 00:27:35.364 "data_offset": 2048, 00:27:35.364 "data_size": 63488 00:27:35.364 }, 00:27:35.364 { 00:27:35.364 "name": "BaseBdev2", 00:27:35.364 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:35.364 "is_configured": true, 00:27:35.364 "data_offset": 2048, 00:27:35.364 "data_size": 63488 00:27:35.364 }, 00:27:35.364 { 00:27:35.364 "name": "BaseBdev3", 00:27:35.364 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:35.364 "is_configured": true, 00:27:35.364 "data_offset": 2048, 00:27:35.364 "data_size": 63488 00:27:35.364 } 00:27:35.364 ] 00:27:35.364 }' 00:27:35.364 00:41:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:35.364 00:41:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:35.364 00:41:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:35.364 00:41:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:27:35.364 00:41:29 -- bdev/bdev_raid.sh@660 -- # break 00:27:35.364 00:41:29 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:35.364 00:41:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:35.364 00:41:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:35.364 00:41:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:35.364 00:41:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:35.365 00:41:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.365 00:41:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.623 00:41:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:35.623 "name": "raid_bdev1", 00:27:35.623 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:35.623 "strip_size_kb": 64, 00:27:35.623 "state": "online", 00:27:35.623 "raid_level": "raid5f", 00:27:35.623 "superblock": true, 00:27:35.623 "num_base_bdevs": 3, 00:27:35.623 "num_base_bdevs_discovered": 3, 00:27:35.623 
"num_base_bdevs_operational": 3, 00:27:35.624 "base_bdevs_list": [ 00:27:35.624 { 00:27:35.624 "name": "spare", 00:27:35.624 "uuid": "13826877-85ff-5e9b-a7a6-bcb9a01f9e73", 00:27:35.624 "is_configured": true, 00:27:35.624 "data_offset": 2048, 00:27:35.624 "data_size": 63488 00:27:35.624 }, 00:27:35.624 { 00:27:35.624 "name": "BaseBdev2", 00:27:35.624 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:35.624 "is_configured": true, 00:27:35.624 "data_offset": 2048, 00:27:35.624 "data_size": 63488 00:27:35.624 }, 00:27:35.624 { 00:27:35.624 "name": "BaseBdev3", 00:27:35.624 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:35.624 "is_configured": true, 00:27:35.624 "data_offset": 2048, 00:27:35.624 "data_size": 63488 00:27:35.624 } 00:27:35.624 ] 00:27:35.624 }' 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.624 00:41:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.882 00:41:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:35.882 "name": "raid_bdev1", 00:27:35.882 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:35.882 "strip_size_kb": 64, 00:27:35.882 "state": "online", 00:27:35.882 "raid_level": "raid5f", 00:27:35.882 "superblock": true, 00:27:35.882 "num_base_bdevs": 3, 00:27:35.882 "num_base_bdevs_discovered": 3, 00:27:35.882 "num_base_bdevs_operational": 3, 00:27:35.882 "base_bdevs_list": [ 00:27:35.882 { 00:27:35.882 "name": "spare", 00:27:35.882 "uuid": "13826877-85ff-5e9b-a7a6-bcb9a01f9e73", 00:27:35.882 "is_configured": true, 00:27:35.882 "data_offset": 2048, 00:27:35.882 "data_size": 63488 00:27:35.882 }, 00:27:35.882 { 00:27:35.882 "name": "BaseBdev2", 00:27:35.882 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:35.882 "is_configured": true, 00:27:35.882 "data_offset": 2048, 00:27:35.882 "data_size": 63488 00:27:35.882 }, 00:27:35.882 { 00:27:35.882 "name": "BaseBdev3", 00:27:35.882 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:35.882 "is_configured": true, 00:27:35.882 "data_offset": 2048, 00:27:35.882 "data_size": 63488 00:27:35.882 } 00:27:35.882 ] 00:27:35.882 }' 00:27:35.882 00:41:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:35.882 00:41:29 -- common/autotest_common.sh@10 -- # set +x 00:27:36.506 00:41:30 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:36.765 [2024-04-24 00:41:30.446347] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:36.765 [2024-04-24 00:41:30.446623] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:36.765 [2024-04-24 00:41:30.446835] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:36.765 [2024-04-24 00:41:30.447071] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:36.765 [2024-04-24 00:41:30.447206] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:27:36.765 00:41:30 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.765 00:41:30 -- bdev/bdev_raid.sh@671 -- # jq length 00:27:37.023 00:41:30 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:27:37.023 00:41:30 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:27:37.023 00:41:30 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:37.023 00:41:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:37.023 00:41:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:37.023 00:41:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:37.023 00:41:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:37.023 00:41:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:37.023 00:41:30 -- bdev/nbd_common.sh@12 -- # local i 00:27:37.023 00:41:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:37.023 00:41:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:37.023 00:41:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:37.280 /dev/nbd0 00:27:37.280 00:41:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:37.539 00:41:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:37.539 00:41:31 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:27:37.539 00:41:31 -- common/autotest_common.sh@855 -- # local i 00:27:37.539 00:41:31 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:37.539 00:41:31 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:37.539 00:41:31 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:27:37.539 00:41:31 -- common/autotest_common.sh@859 -- # break 00:27:37.539 00:41:31 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:37.539 00:41:31 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:37.539 00:41:31 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:37.539 1+0 records in 00:27:37.539 1+0 records out 00:27:37.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520961 s, 7.9 MB/s 00:27:37.539 00:41:31 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.539 00:41:31 -- common/autotest_common.sh@872 -- # size=4096 00:27:37.539 00:41:31 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.539 00:41:31 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:37.539 00:41:31 -- common/autotest_common.sh@875 -- # return 0 00:27:37.539 00:41:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:37.539 00:41:31 -- bdev/nbd_common.sh@14 
-- # (( i < 2 )) 00:27:37.539 00:41:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:37.798 /dev/nbd1 00:27:37.798 00:41:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:37.798 00:41:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:37.798 00:41:31 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:27:37.798 00:41:31 -- common/autotest_common.sh@855 -- # local i 00:27:37.798 00:41:31 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:37.798 00:41:31 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:37.798 00:41:31 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:27:37.798 00:41:31 -- common/autotest_common.sh@859 -- # break 00:27:37.798 00:41:31 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:37.798 00:41:31 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:37.798 00:41:31 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:37.798 1+0 records in 00:27:37.798 1+0 records out 00:27:37.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067378 s, 6.1 MB/s 00:27:37.798 00:41:31 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.798 00:41:31 -- common/autotest_common.sh@872 -- # size=4096 00:27:37.798 00:41:31 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:37.798 00:41:31 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:37.798 00:41:31 -- common/autotest_common.sh@875 -- # return 0 00:27:37.798 00:41:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:37.798 00:41:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:37.798 00:41:31 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:38.056 00:41:31 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:38.056 00:41:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:38.056 00:41:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:38.056 00:41:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:38.056 00:41:31 -- bdev/nbd_common.sh@51 -- # local i 00:27:38.056 00:41:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:38.056 00:41:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:38.313 00:41:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:38.313 00:41:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:38.313 00:41:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:38.313 00:41:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:38.313 00:41:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:38.313 00:41:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:38.313 00:41:31 -- bdev/nbd_common.sh@41 -- # break 00:27:38.313 00:41:31 -- bdev/nbd_common.sh@45 -- # return 0 00:27:38.313 00:41:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:38.313 00:41:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:38.572 00:41:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:38.572 00:41:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:38.572 00:41:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:38.572 00:41:32 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:38.572 00:41:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:38.572 00:41:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:38.572 00:41:32 -- bdev/nbd_common.sh@41 -- # break 00:27:38.572 00:41:32 -- bdev/nbd_common.sh@45 -- # return 0 00:27:38.572 00:41:32 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:27:38.572 00:41:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:38.572 00:41:32 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:27:38.572 00:41:32 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:38.853 00:41:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:39.126 [2024-04-24 00:41:32.795116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:39.126 [2024-04-24 00:41:32.795466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.126 [2024-04-24 00:41:32.795552] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:27:39.126 [2024-04-24 00:41:32.795877] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.126 [2024-04-24 00:41:32.798766] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.126 [2024-04-24 00:41:32.799042] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:39.126 [2024-04-24 00:41:32.799314] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:39.126 [2024-04-24 00:41:32.799481] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:39.126 BaseBdev1 00:27:39.126 00:41:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:39.126 00:41:32 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:27:39.126 00:41:32 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:27:39.385 00:41:33 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:39.643 [2024-04-24 00:41:33.367560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:39.643 [2024-04-24 00:41:33.367883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.643 [2024-04-24 00:41:33.368080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:27:39.643 [2024-04-24 00:41:33.368249] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.644 [2024-04-24 00:41:33.368905] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.644 [2024-04-24 00:41:33.369097] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:39.644 [2024-04-24 00:41:33.369356] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:27:39.644 [2024-04-24 00:41:33.369474] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:27:39.644 [2024-04-24 00:41:33.369563] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:27:39.644 [2024-04-24 00:41:33.369622] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:27:39.644 [2024-04-24 00:41:33.369815] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:39.644 BaseBdev2 00:27:39.644 00:41:33 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:39.644 00:41:33 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:27:39.644 00:41:33 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:27:39.901 00:41:33 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:40.159 [2024-04-24 00:41:33.919699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:40.159 [2024-04-24 00:41:33.920005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:40.159 [2024-04-24 00:41:33.920090] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:40.159 [2024-04-24 00:41:33.920197] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:40.159 [2024-04-24 00:41:33.920743] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:40.159 [2024-04-24 00:41:33.920923] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:40.159 [2024-04-24 00:41:33.921152] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:27:40.159 [2024-04-24 00:41:33.921282] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:40.159 BaseBdev3 00:27:40.159 00:41:33 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:40.418 00:41:34 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:40.986 [2024-04-24 00:41:34.475830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:40.986 [2024-04-24 00:41:34.476198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:40.986 [2024-04-24 00:41:34.476293] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:40.986 [2024-04-24 00:41:34.476484] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:40.986 [2024-04-24 00:41:34.477166] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:40.986 [2024-04-24 00:41:34.477357] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:40.986 [2024-04-24 00:41:34.477599] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:27:40.986 [2024-04-24 00:41:34.477736] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:40.986 spare 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.986 [2024-04-24 00:41:34.577981] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:27:40.986 [2024-04-24 00:41:34.578211] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:27:40.986 [2024-04-24 00:41:34.578429] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:27:40.986 [2024-04-24 00:41:34.585572] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:27:40.986 [2024-04-24 00:41:34.585754] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:27:40.986 [2024-04-24 00:41:34.586090] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:40.986 "name": "raid_bdev1", 00:27:40.986 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:40.986 "strip_size_kb": 64, 00:27:40.986 "state": "online", 00:27:40.986 "raid_level": "raid5f", 00:27:40.986 "superblock": true, 00:27:40.986 "num_base_bdevs": 3, 00:27:40.986 "num_base_bdevs_discovered": 3, 00:27:40.986 "num_base_bdevs_operational": 3, 00:27:40.986 "base_bdevs_list": [ 00:27:40.986 { 00:27:40.986 "name": "spare", 00:27:40.986 "uuid": "13826877-85ff-5e9b-a7a6-bcb9a01f9e73", 00:27:40.986 "is_configured": true, 00:27:40.986 "data_offset": 2048, 00:27:40.986 "data_size": 63488 00:27:40.986 }, 00:27:40.986 { 00:27:40.986 "name": "BaseBdev2", 00:27:40.986 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:40.986 "is_configured": true, 00:27:40.986 "data_offset": 2048, 00:27:40.986 "data_size": 63488 00:27:40.986 }, 00:27:40.986 { 00:27:40.986 "name": "BaseBdev3", 00:27:40.986 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:40.986 "is_configured": true, 00:27:40.986 "data_offset": 2048, 00:27:40.986 "data_size": 63488 00:27:40.986 } 00:27:40.986 ] 00:27:40.986 }' 00:27:40.986 00:41:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:40.986 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:27:41.948 00:41:35 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:41.948 00:41:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:41.948 00:41:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:41.948 00:41:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:41.948 00:41:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:41.948 00:41:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.948 00:41:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.948 00:41:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:41.948 "name": "raid_bdev1", 00:27:41.948 "uuid": "def9197e-bac7-462d-91ec-1ab6ee835e5c", 00:27:41.948 
"strip_size_kb": 64, 00:27:41.948 "state": "online", 00:27:41.948 "raid_level": "raid5f", 00:27:41.948 "superblock": true, 00:27:41.948 "num_base_bdevs": 3, 00:27:41.948 "num_base_bdevs_discovered": 3, 00:27:41.948 "num_base_bdevs_operational": 3, 00:27:41.948 "base_bdevs_list": [ 00:27:41.948 { 00:27:41.948 "name": "spare", 00:27:41.948 "uuid": "13826877-85ff-5e9b-a7a6-bcb9a01f9e73", 00:27:41.948 "is_configured": true, 00:27:41.948 "data_offset": 2048, 00:27:41.948 "data_size": 63488 00:27:41.948 }, 00:27:41.948 { 00:27:41.948 "name": "BaseBdev2", 00:27:41.948 "uuid": "c7b3b731-da14-5533-ba60-ef52d150fc46", 00:27:41.948 "is_configured": true, 00:27:41.948 "data_offset": 2048, 00:27:41.948 "data_size": 63488 00:27:41.948 }, 00:27:41.948 { 00:27:41.948 "name": "BaseBdev3", 00:27:41.948 "uuid": "a77f0419-5e78-5b1b-9f1b-a5c40a67145d", 00:27:41.948 "is_configured": true, 00:27:41.948 "data_offset": 2048, 00:27:41.948 "data_size": 63488 00:27:41.948 } 00:27:41.948 ] 00:27:41.948 }' 00:27:41.948 00:41:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:41.948 00:41:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:41.948 00:41:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:42.206 00:41:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:42.206 00:41:35 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.206 00:41:35 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:42.465 00:41:36 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:27:42.465 00:41:36 -- bdev/bdev_raid.sh@709 -- # killprocess 137994 00:27:42.466 00:41:36 -- common/autotest_common.sh@936 -- # '[' -z 137994 ']' 00:27:42.466 00:41:36 -- common/autotest_common.sh@940 -- # kill -0 137994 00:27:42.466 00:41:36 -- common/autotest_common.sh@941 -- # uname 00:27:42.466 00:41:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:42.466 00:41:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137994 00:27:42.466 killing process with pid 137994 00:27:42.466 Received shutdown signal, test time was about 60.000000 seconds 00:27:42.466 00:27:42.466 Latency(us) 00:27:42.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.466 =================================================================================================================== 00:27:42.466 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:42.466 00:41:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:42.466 00:41:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:42.466 00:41:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137994' 00:27:42.466 00:41:36 -- common/autotest_common.sh@955 -- # kill 137994 00:27:42.466 00:41:36 -- common/autotest_common.sh@960 -- # wait 137994 00:27:42.466 [2024-04-24 00:41:36.097832] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:42.466 [2024-04-24 00:41:36.097928] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:42.466 [2024-04-24 00:41:36.098028] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:42.466 [2024-04-24 00:41:36.098092] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:27:43.031 [2024-04-24 00:41:36.535715] bdev_raid.c:1381:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:27:44.413 ************************************ 00:27:44.413 END TEST raid5f_rebuild_test_sb 00:27:44.413 ************************************ 00:27:44.413 00:41:37 -- bdev/bdev_raid.sh@711 -- # return 0 00:27:44.413 00:27:44.413 real 0m26.920s 00:27:44.413 user 0m41.348s 00:27:44.413 sys 0m3.987s 00:27:44.413 00:41:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:44.413 00:41:37 -- common/autotest_common.sh@10 -- # set +x 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:27:44.413 00:41:38 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:44.413 00:41:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:44.413 00:41:38 -- common/autotest_common.sh@10 -- # set +x 00:27:44.413 ************************************ 00:27:44.413 START TEST raid5f_state_function_test 00:27:44.413 ************************************ 00:27:44.413 00:41:38 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 4 false 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=138657 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 138657' 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:44.413 Process raid pid: 138657 00:27:44.413 00:41:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 138657 /var/tmp/spdk-raid.sock 00:27:44.413 00:41:38 -- common/autotest_common.sh@817 -- # '[' -z 138657 ']' 00:27:44.413 00:41:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:44.413 00:41:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:44.413 00:41:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:44.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:44.413 00:41:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:44.413 00:41:38 -- common/autotest_common.sh@10 -- # set +x 00:27:44.413 [2024-04-24 00:41:38.177964] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:27:44.413 [2024-04-24 00:41:38.178393] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.673 [2024-04-24 00:41:38.363671] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.932 [2024-04-24 00:41:38.636957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.190 [2024-04-24 00:41:38.834910] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:45.449 00:41:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:45.449 00:41:39 -- common/autotest_common.sh@850 -- # return 0 00:27:45.449 00:41:39 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:45.709 [2024-04-24 00:41:39.300759] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:45.709 [2024-04-24 00:41:39.301030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:45.709 [2024-04-24 00:41:39.301129] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:45.709 [2024-04-24 00:41:39.301187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:45.709 [2024-04-24 00:41:39.301268] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:45.709 [2024-04-24 00:41:39.301342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:45.709 [2024-04-24 00:41:39.301420] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:45.709 [2024-04-24 00:41:39.301471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:45.709 00:41:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:45.709 00:41:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:45.709 00:41:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:45.709 00:41:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:45.709 00:41:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:45.709 00:41:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:45.709 00:41:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:27:45.709 00:41:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:45.709 00:41:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:45.709 00:41:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:45.709 00:41:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:45.709 00:41:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.967 00:41:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:45.967 "name": "Existed_Raid", 00:27:45.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.967 "strip_size_kb": 64, 00:27:45.967 "state": "configuring", 00:27:45.967 "raid_level": "raid5f", 00:27:45.967 "superblock": false, 00:27:45.967 "num_base_bdevs": 4, 00:27:45.967 "num_base_bdevs_discovered": 0, 00:27:45.967 "num_base_bdevs_operational": 4, 00:27:45.967 "base_bdevs_list": [ 00:27:45.967 { 00:27:45.967 "name": "BaseBdev1", 00:27:45.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.967 "is_configured": false, 00:27:45.967 "data_offset": 0, 00:27:45.967 "data_size": 0 00:27:45.967 }, 00:27:45.967 { 00:27:45.967 "name": "BaseBdev2", 00:27:45.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.967 "is_configured": false, 00:27:45.967 "data_offset": 0, 00:27:45.967 "data_size": 0 00:27:45.967 }, 00:27:45.967 { 00:27:45.967 "name": "BaseBdev3", 00:27:45.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.967 "is_configured": false, 00:27:45.967 "data_offset": 0, 00:27:45.967 "data_size": 0 00:27:45.967 }, 00:27:45.967 { 00:27:45.967 "name": "BaseBdev4", 00:27:45.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.967 "is_configured": false, 00:27:45.967 "data_offset": 0, 00:27:45.967 "data_size": 0 00:27:45.967 } 00:27:45.967 ] 00:27:45.967 }' 00:27:45.967 00:41:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:45.967 00:41:39 -- common/autotest_common.sh@10 -- # set +x 00:27:46.593 00:41:40 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:46.852 [2024-04-24 00:41:40.500892] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:46.852 [2024-04-24 00:41:40.501106] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:27:46.852 00:41:40 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:47.110 [2024-04-24 00:41:40.772950] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:47.110 [2024-04-24 00:41:40.773199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:47.110 [2024-04-24 00:41:40.773288] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:47.110 [2024-04-24 00:41:40.773345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:47.110 [2024-04-24 00:41:40.773422] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:47.110 [2024-04-24 00:41:40.773498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:47.110 [2024-04-24 00:41:40.773680] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
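A minimal bash sketch of the step the trace above is exercising: requesting a raid5f array before any of its base bdevs exist and confirming it parks in the "configuring" state. The rpc.py path, socket, RPC names and jq filter are taken from the trace; the variable names and the standalone-script framing are illustrative assumptions, not part of the test suite.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Request a 4-disk raid5f bdev while BaseBdev1..4 are still missing; the RPC
# registers the raid bdev anyway and leaves it waiting for its base bdevs.
$rpc bdev_raid_create -z 64 -r raid5f \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# Read it back and check that it reports "configuring" with nothing discovered yet.
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r '.state' <<< "$info") == "configuring" ]] || exit 1
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == "0" ]] || exit 1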
00:27:47.110 [2024-04-24 00:41:40.773733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:47.110 00:41:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:27:47.367 [2024-04-24 00:41:41.072453] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:47.367 BaseBdev1 00:27:47.367 00:41:41 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:27:47.367 00:41:41 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:27:47.367 00:41:41 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:47.367 00:41:41 -- common/autotest_common.sh@887 -- # local i 00:27:47.367 00:41:41 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:47.367 00:41:41 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:47.367 00:41:41 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:47.626 00:41:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:47.885 [ 00:27:47.885 { 00:27:47.885 "name": "BaseBdev1", 00:27:47.885 "aliases": [ 00:27:47.885 "3f191ccb-3cb9-45d3-a426-eeb508da1b52" 00:27:47.885 ], 00:27:47.885 "product_name": "Malloc disk", 00:27:47.885 "block_size": 512, 00:27:47.885 "num_blocks": 65536, 00:27:47.885 "uuid": "3f191ccb-3cb9-45d3-a426-eeb508da1b52", 00:27:47.885 "assigned_rate_limits": { 00:27:47.885 "rw_ios_per_sec": 0, 00:27:47.885 "rw_mbytes_per_sec": 0, 00:27:47.885 "r_mbytes_per_sec": 0, 00:27:47.885 "w_mbytes_per_sec": 0 00:27:47.885 }, 00:27:47.885 "claimed": true, 00:27:47.885 "claim_type": "exclusive_write", 00:27:47.885 "zoned": false, 00:27:47.885 "supported_io_types": { 00:27:47.885 "read": true, 00:27:47.885 "write": true, 00:27:47.885 "unmap": true, 00:27:47.885 "write_zeroes": true, 00:27:47.885 "flush": true, 00:27:47.885 "reset": true, 00:27:47.885 "compare": false, 00:27:47.885 "compare_and_write": false, 00:27:47.885 "abort": true, 00:27:47.885 "nvme_admin": false, 00:27:47.885 "nvme_io": false 00:27:47.885 }, 00:27:47.885 "memory_domains": [ 00:27:47.885 { 00:27:47.885 "dma_device_id": "system", 00:27:47.885 "dma_device_type": 1 00:27:47.885 }, 00:27:47.885 { 00:27:47.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.885 "dma_device_type": 2 00:27:47.885 } 00:27:47.885 ], 00:27:47.885 "driver_specific": {} 00:27:47.885 } 00:27:47.885 ] 00:27:47.885 00:41:41 -- common/autotest_common.sh@893 -- # return 0 00:27:47.885 00:41:41 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:47.885 00:41:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:47.885 00:41:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:47.885 00:41:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:47.885 00:41:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:47.885 00:41:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:47.885 00:41:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:47.885 00:41:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:47.885 00:41:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:47.885 00:41:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:47.885 00:41:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.885 00:41:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:48.143 00:41:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:48.143 "name": "Existed_Raid", 00:27:48.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.143 "strip_size_kb": 64, 00:27:48.143 "state": "configuring", 00:27:48.143 "raid_level": "raid5f", 00:27:48.143 "superblock": false, 00:27:48.143 "num_base_bdevs": 4, 00:27:48.143 "num_base_bdevs_discovered": 1, 00:27:48.143 "num_base_bdevs_operational": 4, 00:27:48.143 "base_bdevs_list": [ 00:27:48.143 { 00:27:48.143 "name": "BaseBdev1", 00:27:48.143 "uuid": "3f191ccb-3cb9-45d3-a426-eeb508da1b52", 00:27:48.143 "is_configured": true, 00:27:48.143 "data_offset": 0, 00:27:48.143 "data_size": 65536 00:27:48.143 }, 00:27:48.143 { 00:27:48.143 "name": "BaseBdev2", 00:27:48.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.143 "is_configured": false, 00:27:48.143 "data_offset": 0, 00:27:48.143 "data_size": 0 00:27:48.143 }, 00:27:48.143 { 00:27:48.143 "name": "BaseBdev3", 00:27:48.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.143 "is_configured": false, 00:27:48.143 "data_offset": 0, 00:27:48.143 "data_size": 0 00:27:48.143 }, 00:27:48.143 { 00:27:48.143 "name": "BaseBdev4", 00:27:48.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.143 "is_configured": false, 00:27:48.143 "data_offset": 0, 00:27:48.143 "data_size": 0 00:27:48.143 } 00:27:48.143 ] 00:27:48.143 }' 00:27:48.143 00:41:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:48.143 00:41:41 -- common/autotest_common.sh@10 -- # set +x 00:27:48.720 00:41:42 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:48.978 [2024-04-24 00:41:42.624860] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:48.978 [2024-04-24 00:41:42.625100] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:27:48.978 00:41:42 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:27:48.978 00:41:42 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:27:49.278 [2024-04-24 00:41:42.904927] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:49.278 [2024-04-24 00:41:42.907186] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:49.278 [2024-04-24 00:41:42.907376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:49.278 [2024-04-24 00:41:42.907467] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:49.278 [2024-04-24 00:41:42.907527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:49.278 [2024-04-24 00:41:42.907604] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:27:49.278 [2024-04-24 00:41:42.907653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
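The next stretch of the trace adds the malloc base bdevs one by one and re-checks the array after each. A compact bash sketch of that loop, assuming the same rpc.py invocation as above; the real test drives this through its waitforbdev and verify_raid_bdev_state helpers rather than an inline loop, so the loop and echo below are illustrative only.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# 32 MiB malloc bdevs with 512-byte blocks, matching the sizes in the trace.
for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $rpc bdev_malloc_create 32 512 -b "$bdev"
    $rpc bdev_wait_for_examine          # let the raid module claim the new base bdev
    discovered=$($rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered')
    echo "$bdev claimed, num_base_bdevs_discovered=$discovered"
done
# Only after the fourth base bdev arrives does the array leave "configuring"
# and report "online".
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'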
00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.278 00:41:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:49.538 00:41:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:49.538 "name": "Existed_Raid", 00:27:49.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.538 "strip_size_kb": 64, 00:27:49.538 "state": "configuring", 00:27:49.538 "raid_level": "raid5f", 00:27:49.538 "superblock": false, 00:27:49.538 "num_base_bdevs": 4, 00:27:49.538 "num_base_bdevs_discovered": 1, 00:27:49.538 "num_base_bdevs_operational": 4, 00:27:49.538 "base_bdevs_list": [ 00:27:49.538 { 00:27:49.538 "name": "BaseBdev1", 00:27:49.538 "uuid": "3f191ccb-3cb9-45d3-a426-eeb508da1b52", 00:27:49.538 "is_configured": true, 00:27:49.538 "data_offset": 0, 00:27:49.538 "data_size": 65536 00:27:49.538 }, 00:27:49.538 { 00:27:49.538 "name": "BaseBdev2", 00:27:49.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.538 "is_configured": false, 00:27:49.538 "data_offset": 0, 00:27:49.538 "data_size": 0 00:27:49.538 }, 00:27:49.538 { 00:27:49.538 "name": "BaseBdev3", 00:27:49.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.538 "is_configured": false, 00:27:49.538 "data_offset": 0, 00:27:49.538 "data_size": 0 00:27:49.538 }, 00:27:49.538 { 00:27:49.538 "name": "BaseBdev4", 00:27:49.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.538 "is_configured": false, 00:27:49.538 "data_offset": 0, 00:27:49.538 "data_size": 0 00:27:49.538 } 00:27:49.538 ] 00:27:49.538 }' 00:27:49.538 00:41:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:49.538 00:41:43 -- common/autotest_common.sh@10 -- # set +x 00:27:50.111 00:41:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:27:50.368 [2024-04-24 00:41:44.114991] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:50.368 BaseBdev2 00:27:50.368 00:41:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:27:50.368 00:41:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:27:50.368 00:41:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:50.368 00:41:44 -- common/autotest_common.sh@887 -- # local i 00:27:50.368 00:41:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:50.368 00:41:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:50.368 00:41:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:50.625 00:41:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
00:27:51.190 [ 00:27:51.190 { 00:27:51.190 "name": "BaseBdev2", 00:27:51.190 "aliases": [ 00:27:51.190 "5892c6d3-0614-4042-b264-c45fdb43b1a8" 00:27:51.190 ], 00:27:51.190 "product_name": "Malloc disk", 00:27:51.190 "block_size": 512, 00:27:51.190 "num_blocks": 65536, 00:27:51.190 "uuid": "5892c6d3-0614-4042-b264-c45fdb43b1a8", 00:27:51.190 "assigned_rate_limits": { 00:27:51.190 "rw_ios_per_sec": 0, 00:27:51.190 "rw_mbytes_per_sec": 0, 00:27:51.190 "r_mbytes_per_sec": 0, 00:27:51.190 "w_mbytes_per_sec": 0 00:27:51.190 }, 00:27:51.190 "claimed": true, 00:27:51.190 "claim_type": "exclusive_write", 00:27:51.190 "zoned": false, 00:27:51.190 "supported_io_types": { 00:27:51.190 "read": true, 00:27:51.190 "write": true, 00:27:51.190 "unmap": true, 00:27:51.190 "write_zeroes": true, 00:27:51.190 "flush": true, 00:27:51.190 "reset": true, 00:27:51.190 "compare": false, 00:27:51.190 "compare_and_write": false, 00:27:51.190 "abort": true, 00:27:51.190 "nvme_admin": false, 00:27:51.190 "nvme_io": false 00:27:51.190 }, 00:27:51.190 "memory_domains": [ 00:27:51.190 { 00:27:51.190 "dma_device_id": "system", 00:27:51.190 "dma_device_type": 1 00:27:51.190 }, 00:27:51.190 { 00:27:51.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:51.190 "dma_device_type": 2 00:27:51.190 } 00:27:51.190 ], 00:27:51.190 "driver_specific": {} 00:27:51.190 } 00:27:51.190 ] 00:27:51.190 00:41:44 -- common/autotest_common.sh@893 -- # return 0 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:51.190 "name": "Existed_Raid", 00:27:51.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.190 "strip_size_kb": 64, 00:27:51.190 "state": "configuring", 00:27:51.190 "raid_level": "raid5f", 00:27:51.190 "superblock": false, 00:27:51.190 "num_base_bdevs": 4, 00:27:51.190 "num_base_bdevs_discovered": 2, 00:27:51.190 "num_base_bdevs_operational": 4, 00:27:51.190 "base_bdevs_list": [ 00:27:51.190 { 00:27:51.190 "name": "BaseBdev1", 00:27:51.190 "uuid": "3f191ccb-3cb9-45d3-a426-eeb508da1b52", 00:27:51.190 "is_configured": true, 00:27:51.190 "data_offset": 0, 00:27:51.190 "data_size": 65536 00:27:51.190 }, 00:27:51.190 { 00:27:51.190 "name": "BaseBdev2", 00:27:51.190 "uuid": "5892c6d3-0614-4042-b264-c45fdb43b1a8", 00:27:51.190 "is_configured": true, 00:27:51.190 "data_offset": 0, 00:27:51.190 "data_size": 65536 00:27:51.190 }, 00:27:51.190 
{ 00:27:51.190 "name": "BaseBdev3", 00:27:51.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.190 "is_configured": false, 00:27:51.190 "data_offset": 0, 00:27:51.190 "data_size": 0 00:27:51.190 }, 00:27:51.190 { 00:27:51.190 "name": "BaseBdev4", 00:27:51.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.190 "is_configured": false, 00:27:51.190 "data_offset": 0, 00:27:51.190 "data_size": 0 00:27:51.190 } 00:27:51.190 ] 00:27:51.190 }' 00:27:51.190 00:41:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:51.190 00:41:44 -- common/autotest_common.sh@10 -- # set +x 00:27:51.785 00:41:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:27:52.352 [2024-04-24 00:41:45.845003] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:52.352 BaseBdev3 00:27:52.352 00:41:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:27:52.352 00:41:45 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:27:52.352 00:41:45 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:52.352 00:41:45 -- common/autotest_common.sh@887 -- # local i 00:27:52.352 00:41:45 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:52.352 00:41:45 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:52.352 00:41:45 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:52.611 00:41:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:52.870 [ 00:27:52.870 { 00:27:52.870 "name": "BaseBdev3", 00:27:52.870 "aliases": [ 00:27:52.870 "cf45fac2-fcbe-4cfb-b624-163343626e07" 00:27:52.870 ], 00:27:52.870 "product_name": "Malloc disk", 00:27:52.870 "block_size": 512, 00:27:52.870 "num_blocks": 65536, 00:27:52.870 "uuid": "cf45fac2-fcbe-4cfb-b624-163343626e07", 00:27:52.870 "assigned_rate_limits": { 00:27:52.870 "rw_ios_per_sec": 0, 00:27:52.870 "rw_mbytes_per_sec": 0, 00:27:52.870 "r_mbytes_per_sec": 0, 00:27:52.870 "w_mbytes_per_sec": 0 00:27:52.870 }, 00:27:52.870 "claimed": true, 00:27:52.870 "claim_type": "exclusive_write", 00:27:52.870 "zoned": false, 00:27:52.870 "supported_io_types": { 00:27:52.870 "read": true, 00:27:52.870 "write": true, 00:27:52.870 "unmap": true, 00:27:52.870 "write_zeroes": true, 00:27:52.870 "flush": true, 00:27:52.870 "reset": true, 00:27:52.870 "compare": false, 00:27:52.870 "compare_and_write": false, 00:27:52.870 "abort": true, 00:27:52.870 "nvme_admin": false, 00:27:52.870 "nvme_io": false 00:27:52.870 }, 00:27:52.870 "memory_domains": [ 00:27:52.870 { 00:27:52.870 "dma_device_id": "system", 00:27:52.870 "dma_device_type": 1 00:27:52.870 }, 00:27:52.870 { 00:27:52.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:52.870 "dma_device_type": 2 00:27:52.870 } 00:27:52.870 ], 00:27:52.870 "driver_specific": {} 00:27:52.870 } 00:27:52.870 ] 00:27:52.870 00:41:46 -- common/autotest_common.sh@893 -- # return 0 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:27:52.870 00:41:46 
-- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.870 00:41:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:53.127 00:41:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:53.127 "name": "Existed_Raid", 00:27:53.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.127 "strip_size_kb": 64, 00:27:53.127 "state": "configuring", 00:27:53.127 "raid_level": "raid5f", 00:27:53.127 "superblock": false, 00:27:53.127 "num_base_bdevs": 4, 00:27:53.127 "num_base_bdevs_discovered": 3, 00:27:53.127 "num_base_bdevs_operational": 4, 00:27:53.127 "base_bdevs_list": [ 00:27:53.127 { 00:27:53.127 "name": "BaseBdev1", 00:27:53.127 "uuid": "3f191ccb-3cb9-45d3-a426-eeb508da1b52", 00:27:53.127 "is_configured": true, 00:27:53.127 "data_offset": 0, 00:27:53.127 "data_size": 65536 00:27:53.127 }, 00:27:53.127 { 00:27:53.127 "name": "BaseBdev2", 00:27:53.127 "uuid": "5892c6d3-0614-4042-b264-c45fdb43b1a8", 00:27:53.127 "is_configured": true, 00:27:53.127 "data_offset": 0, 00:27:53.127 "data_size": 65536 00:27:53.127 }, 00:27:53.127 { 00:27:53.127 "name": "BaseBdev3", 00:27:53.127 "uuid": "cf45fac2-fcbe-4cfb-b624-163343626e07", 00:27:53.127 "is_configured": true, 00:27:53.127 "data_offset": 0, 00:27:53.127 "data_size": 65536 00:27:53.127 }, 00:27:53.127 { 00:27:53.127 "name": "BaseBdev4", 00:27:53.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.128 "is_configured": false, 00:27:53.128 "data_offset": 0, 00:27:53.128 "data_size": 0 00:27:53.128 } 00:27:53.128 ] 00:27:53.128 }' 00:27:53.128 00:41:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:53.128 00:41:46 -- common/autotest_common.sh@10 -- # set +x 00:27:53.694 00:41:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:27:54.262 [2024-04-24 00:41:47.758139] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:54.262 [2024-04-24 00:41:47.758203] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:27:54.262 [2024-04-24 00:41:47.758212] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:27:54.262 [2024-04-24 00:41:47.758318] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:27:54.262 [2024-04-24 00:41:47.766229] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:27:54.262 [2024-04-24 00:41:47.766260] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:27:54.262 [2024-04-24 00:41:47.766551] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:54.262 BaseBdev4 00:27:54.262 00:41:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:27:54.262 00:41:47 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:27:54.262 00:41:47 -- 
common/autotest_common.sh@886 -- # local bdev_timeout= 00:27:54.262 00:41:47 -- common/autotest_common.sh@887 -- # local i 00:27:54.262 00:41:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:27:54.262 00:41:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:27:54.262 00:41:47 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:54.521 00:41:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:27:54.521 [ 00:27:54.521 { 00:27:54.521 "name": "BaseBdev4", 00:27:54.521 "aliases": [ 00:27:54.521 "c56ae760-d38a-4173-a2cd-9ce3709e98ff" 00:27:54.521 ], 00:27:54.521 "product_name": "Malloc disk", 00:27:54.521 "block_size": 512, 00:27:54.521 "num_blocks": 65536, 00:27:54.521 "uuid": "c56ae760-d38a-4173-a2cd-9ce3709e98ff", 00:27:54.521 "assigned_rate_limits": { 00:27:54.521 "rw_ios_per_sec": 0, 00:27:54.521 "rw_mbytes_per_sec": 0, 00:27:54.521 "r_mbytes_per_sec": 0, 00:27:54.521 "w_mbytes_per_sec": 0 00:27:54.521 }, 00:27:54.521 "claimed": true, 00:27:54.521 "claim_type": "exclusive_write", 00:27:54.521 "zoned": false, 00:27:54.521 "supported_io_types": { 00:27:54.521 "read": true, 00:27:54.521 "write": true, 00:27:54.521 "unmap": true, 00:27:54.521 "write_zeroes": true, 00:27:54.521 "flush": true, 00:27:54.521 "reset": true, 00:27:54.521 "compare": false, 00:27:54.521 "compare_and_write": false, 00:27:54.521 "abort": true, 00:27:54.521 "nvme_admin": false, 00:27:54.521 "nvme_io": false 00:27:54.521 }, 00:27:54.521 "memory_domains": [ 00:27:54.521 { 00:27:54.521 "dma_device_id": "system", 00:27:54.521 "dma_device_type": 1 00:27:54.521 }, 00:27:54.521 { 00:27:54.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:54.521 "dma_device_type": 2 00:27:54.521 } 00:27:54.521 ], 00:27:54.521 "driver_specific": {} 00:27:54.521 } 00:27:54.521 ] 00:27:54.779 00:41:48 -- common/autotest_common.sh@893 -- # return 0 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.779 00:41:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:55.037 00:41:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:55.038 "name": "Existed_Raid", 00:27:55.038 "uuid": "2d0ae3fc-749c-4f59-8dbc-af968912ce6f", 00:27:55.038 "strip_size_kb": 64, 00:27:55.038 "state": "online", 00:27:55.038 "raid_level": "raid5f", 00:27:55.038 "superblock": false, 00:27:55.038 "num_base_bdevs": 4, 
00:27:55.038 "num_base_bdevs_discovered": 4, 00:27:55.038 "num_base_bdevs_operational": 4, 00:27:55.038 "base_bdevs_list": [ 00:27:55.038 { 00:27:55.038 "name": "BaseBdev1", 00:27:55.038 "uuid": "3f191ccb-3cb9-45d3-a426-eeb508da1b52", 00:27:55.038 "is_configured": true, 00:27:55.038 "data_offset": 0, 00:27:55.038 "data_size": 65536 00:27:55.038 }, 00:27:55.038 { 00:27:55.038 "name": "BaseBdev2", 00:27:55.038 "uuid": "5892c6d3-0614-4042-b264-c45fdb43b1a8", 00:27:55.038 "is_configured": true, 00:27:55.038 "data_offset": 0, 00:27:55.038 "data_size": 65536 00:27:55.038 }, 00:27:55.038 { 00:27:55.038 "name": "BaseBdev3", 00:27:55.038 "uuid": "cf45fac2-fcbe-4cfb-b624-163343626e07", 00:27:55.038 "is_configured": true, 00:27:55.038 "data_offset": 0, 00:27:55.038 "data_size": 65536 00:27:55.038 }, 00:27:55.038 { 00:27:55.038 "name": "BaseBdev4", 00:27:55.038 "uuid": "c56ae760-d38a-4173-a2cd-9ce3709e98ff", 00:27:55.038 "is_configured": true, 00:27:55.038 "data_offset": 0, 00:27:55.038 "data_size": 65536 00:27:55.038 } 00:27:55.038 ] 00:27:55.038 }' 00:27:55.038 00:41:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:55.038 00:41:48 -- common/autotest_common.sh@10 -- # set +x 00:27:55.603 00:41:49 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:55.860 [2024-04-24 00:41:49.560801] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@196 -- # return 0 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.118 00:41:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:56.392 00:41:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:56.392 "name": "Existed_Raid", 00:27:56.392 "uuid": "2d0ae3fc-749c-4f59-8dbc-af968912ce6f", 00:27:56.392 "strip_size_kb": 64, 00:27:56.392 "state": "online", 00:27:56.392 "raid_level": "raid5f", 00:27:56.392 "superblock": false, 00:27:56.392 "num_base_bdevs": 4, 00:27:56.392 "num_base_bdevs_discovered": 3, 00:27:56.392 "num_base_bdevs_operational": 3, 00:27:56.392 "base_bdevs_list": [ 00:27:56.392 { 00:27:56.392 "name": null, 00:27:56.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.392 "is_configured": false, 00:27:56.392 "data_offset": 0, 00:27:56.392 "data_size": 65536 00:27:56.392 }, 00:27:56.392 { 00:27:56.392 
"name": "BaseBdev2", 00:27:56.392 "uuid": "5892c6d3-0614-4042-b264-c45fdb43b1a8", 00:27:56.392 "is_configured": true, 00:27:56.392 "data_offset": 0, 00:27:56.392 "data_size": 65536 00:27:56.392 }, 00:27:56.392 { 00:27:56.392 "name": "BaseBdev3", 00:27:56.392 "uuid": "cf45fac2-fcbe-4cfb-b624-163343626e07", 00:27:56.392 "is_configured": true, 00:27:56.392 "data_offset": 0, 00:27:56.392 "data_size": 65536 00:27:56.392 }, 00:27:56.392 { 00:27:56.392 "name": "BaseBdev4", 00:27:56.392 "uuid": "c56ae760-d38a-4173-a2cd-9ce3709e98ff", 00:27:56.392 "is_configured": true, 00:27:56.392 "data_offset": 0, 00:27:56.392 "data_size": 65536 00:27:56.392 } 00:27:56.392 ] 00:27:56.392 }' 00:27:56.392 00:41:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:56.392 00:41:50 -- common/autotest_common.sh@10 -- # set +x 00:27:56.957 00:41:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:27:56.957 00:41:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:56.957 00:41:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.957 00:41:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:57.523 00:41:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:57.523 00:41:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:57.523 00:41:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:57.523 [2024-04-24 00:41:51.303011] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:57.523 [2024-04-24 00:41:51.303112] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:57.781 [2024-04-24 00:41:51.412052] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:57.781 00:41:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:57.781 00:41:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:57.781 00:41:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.781 00:41:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:58.039 00:41:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:58.039 00:41:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:58.039 00:41:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:27:58.297 [2024-04-24 00:41:51.992332] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:58.555 00:41:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:58.555 00:41:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:58.555 00:41:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.555 00:41:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:27:58.813 00:41:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:27:58.813 00:41:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:58.813 00:41:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:27:59.070 [2024-04-24 00:41:52.657462] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:27:59.070 [2024-04-24 00:41:52.657527] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, 
state offline 00:27:59.070 00:41:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:27:59.070 00:41:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:27:59.070 00:41:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.070 00:41:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:27:59.328 00:41:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:27:59.328 00:41:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:27:59.328 00:41:53 -- bdev/bdev_raid.sh@287 -- # killprocess 138657 00:27:59.328 00:41:53 -- common/autotest_common.sh@936 -- # '[' -z 138657 ']' 00:27:59.328 00:41:53 -- common/autotest_common.sh@940 -- # kill -0 138657 00:27:59.328 00:41:53 -- common/autotest_common.sh@941 -- # uname 00:27:59.328 00:41:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:59.328 00:41:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138657 00:27:59.587 00:41:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:59.587 00:41:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:59.587 00:41:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138657' 00:27:59.587 killing process with pid 138657 00:27:59.587 00:41:53 -- common/autotest_common.sh@955 -- # kill 138657 00:27:59.587 [2024-04-24 00:41:53.124977] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:59.587 00:41:53 -- common/autotest_common.sh@960 -- # wait 138657 00:27:59.587 [2024-04-24 00:41:53.125104] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:00.959 00:41:54 -- bdev/bdev_raid.sh@289 -- # return 0 00:28:00.959 00:28:00.959 real 0m16.519s 00:28:00.959 user 0m28.691s 00:28:00.959 sys 0m2.199s 00:28:00.959 00:41:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:00.959 00:41:54 -- common/autotest_common.sh@10 -- # set +x 00:28:00.959 ************************************ 00:28:00.959 END TEST raid5f_state_function_test 00:28:00.959 ************************************ 00:28:00.959 00:41:54 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:28:00.959 00:41:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:28:00.959 00:41:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:00.959 00:41:54 -- common/autotest_common.sh@10 -- # set +x 00:28:00.959 ************************************ 00:28:00.959 START TEST raid5f_state_function_test_sb 00:28:00.959 ************************************ 00:28:00.959 00:41:54 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 4 true 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=139120 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 139120' 00:28:00.960 Process raid pid: 139120 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:28:00.960 00:41:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 139120 /var/tmp/spdk-raid.sock 00:28:00.960 00:41:54 -- common/autotest_common.sh@817 -- # '[' -z 139120 ']' 00:28:00.960 00:41:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:00.960 00:41:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:00.960 00:41:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:00.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:00.960 00:41:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:00.960 00:41:54 -- common/autotest_common.sh@10 -- # set +x 00:28:01.218 [2024-04-24 00:41:54.785937] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:28:01.218 [2024-04-24 00:41:54.786680] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.218 [2024-04-24 00:41:54.977388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.477 [2024-04-24 00:41:55.215897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.734 [2024-04-24 00:41:55.465565] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:01.992 00:41:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:01.992 00:41:55 -- common/autotest_common.sh@850 -- # return 0 00:28:01.992 00:41:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:02.250 [2024-04-24 00:41:55.915667] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:02.251 [2024-04-24 00:41:55.916079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:02.251 [2024-04-24 00:41:55.916240] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:02.251 [2024-04-24 00:41:55.916380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:02.251 [2024-04-24 00:41:55.916497] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:02.251 [2024-04-24 00:41:55.916691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:02.251 [2024-04-24 00:41:55.916790] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:02.251 [2024-04-24 00:41:55.916888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:02.251 00:41:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:02.251 00:41:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:02.251 00:41:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:02.251 00:41:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:02.251 00:41:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:02.251 00:41:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:02.251 00:41:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:02.251 00:41:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:02.251 00:41:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:02.251 00:41:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:02.251 00:41:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.251 00:41:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.508 00:41:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:02.508 "name": "Existed_Raid", 00:28:02.508 "uuid": "d85aa989-e341-43ff-a1be-815a0e081a1f", 00:28:02.509 "strip_size_kb": 64, 00:28:02.509 "state": "configuring", 00:28:02.509 "raid_level": "raid5f", 00:28:02.509 "superblock": true, 00:28:02.509 "num_base_bdevs": 4, 00:28:02.509 "num_base_bdevs_discovered": 0, 00:28:02.509 "num_base_bdevs_operational": 4, 00:28:02.509 "base_bdevs_list": [ 00:28:02.509 { 
00:28:02.509 "name": "BaseBdev1", 00:28:02.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.509 "is_configured": false, 00:28:02.509 "data_offset": 0, 00:28:02.509 "data_size": 0 00:28:02.509 }, 00:28:02.509 { 00:28:02.509 "name": "BaseBdev2", 00:28:02.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.509 "is_configured": false, 00:28:02.509 "data_offset": 0, 00:28:02.509 "data_size": 0 00:28:02.509 }, 00:28:02.509 { 00:28:02.509 "name": "BaseBdev3", 00:28:02.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.509 "is_configured": false, 00:28:02.509 "data_offset": 0, 00:28:02.509 "data_size": 0 00:28:02.509 }, 00:28:02.509 { 00:28:02.509 "name": "BaseBdev4", 00:28:02.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.509 "is_configured": false, 00:28:02.509 "data_offset": 0, 00:28:02.509 "data_size": 0 00:28:02.509 } 00:28:02.509 ] 00:28:02.509 }' 00:28:02.509 00:41:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:02.509 00:41:56 -- common/autotest_common.sh@10 -- # set +x 00:28:03.441 00:41:56 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:03.441 [2024-04-24 00:41:57.151648] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:03.441 [2024-04-24 00:41:57.151930] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:28:03.441 00:41:57 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:03.698 [2024-04-24 00:41:57.427726] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:03.698 [2024-04-24 00:41:57.428029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:03.698 [2024-04-24 00:41:57.428130] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:03.698 [2024-04-24 00:41:57.428197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:03.698 [2024-04-24 00:41:57.428230] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:03.698 [2024-04-24 00:41:57.428297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:03.698 [2024-04-24 00:41:57.428393] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:03.698 [2024-04-24 00:41:57.428452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:03.698 00:41:57 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:03.978 [2024-04-24 00:41:57.727859] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:03.978 BaseBdev1 00:28:03.978 00:41:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:28:03.978 00:41:57 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:28:03.978 00:41:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:03.978 00:41:57 -- common/autotest_common.sh@887 -- # local i 00:28:03.978 00:41:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:03.978 00:41:57 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:03.978 00:41:57 -- common/autotest_common.sh@890 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:04.259 00:41:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:04.516 [ 00:28:04.516 { 00:28:04.516 "name": "BaseBdev1", 00:28:04.516 "aliases": [ 00:28:04.516 "bf06e836-53ea-4064-ae30-f09d5970bbc7" 00:28:04.516 ], 00:28:04.516 "product_name": "Malloc disk", 00:28:04.516 "block_size": 512, 00:28:04.516 "num_blocks": 65536, 00:28:04.516 "uuid": "bf06e836-53ea-4064-ae30-f09d5970bbc7", 00:28:04.516 "assigned_rate_limits": { 00:28:04.516 "rw_ios_per_sec": 0, 00:28:04.516 "rw_mbytes_per_sec": 0, 00:28:04.516 "r_mbytes_per_sec": 0, 00:28:04.516 "w_mbytes_per_sec": 0 00:28:04.516 }, 00:28:04.516 "claimed": true, 00:28:04.516 "claim_type": "exclusive_write", 00:28:04.516 "zoned": false, 00:28:04.516 "supported_io_types": { 00:28:04.516 "read": true, 00:28:04.516 "write": true, 00:28:04.516 "unmap": true, 00:28:04.516 "write_zeroes": true, 00:28:04.516 "flush": true, 00:28:04.516 "reset": true, 00:28:04.516 "compare": false, 00:28:04.516 "compare_and_write": false, 00:28:04.516 "abort": true, 00:28:04.516 "nvme_admin": false, 00:28:04.516 "nvme_io": false 00:28:04.516 }, 00:28:04.516 "memory_domains": [ 00:28:04.516 { 00:28:04.516 "dma_device_id": "system", 00:28:04.516 "dma_device_type": 1 00:28:04.516 }, 00:28:04.516 { 00:28:04.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.516 "dma_device_type": 2 00:28:04.516 } 00:28:04.516 ], 00:28:04.516 "driver_specific": {} 00:28:04.516 } 00:28:04.516 ] 00:28:04.774 00:41:58 -- common/autotest_common.sh@893 -- # return 0 00:28:04.774 00:41:58 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:04.774 00:41:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:04.774 00:41:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:04.774 00:41:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:04.774 00:41:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:04.774 00:41:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:04.774 00:41:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:04.774 00:41:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:04.774 00:41:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:04.774 00:41:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:04.774 00:41:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.774 00:41:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.033 00:41:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:05.033 "name": "Existed_Raid", 00:28:05.033 "uuid": "08402e68-d02a-4621-96c0-b25217b8bd54", 00:28:05.033 "strip_size_kb": 64, 00:28:05.033 "state": "configuring", 00:28:05.033 "raid_level": "raid5f", 00:28:05.033 "superblock": true, 00:28:05.033 "num_base_bdevs": 4, 00:28:05.033 "num_base_bdevs_discovered": 1, 00:28:05.033 "num_base_bdevs_operational": 4, 00:28:05.033 "base_bdevs_list": [ 00:28:05.033 { 00:28:05.033 "name": "BaseBdev1", 00:28:05.033 "uuid": "bf06e836-53ea-4064-ae30-f09d5970bbc7", 00:28:05.033 "is_configured": true, 00:28:05.033 "data_offset": 2048, 00:28:05.033 "data_size": 63488 00:28:05.033 }, 00:28:05.033 { 00:28:05.033 "name": "BaseBdev2", 00:28:05.033 "uuid": "00000000-0000-0000-0000-000000000000", 
00:28:05.033 "is_configured": false, 00:28:05.033 "data_offset": 0, 00:28:05.033 "data_size": 0 00:28:05.033 }, 00:28:05.033 { 00:28:05.033 "name": "BaseBdev3", 00:28:05.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.033 "is_configured": false, 00:28:05.033 "data_offset": 0, 00:28:05.033 "data_size": 0 00:28:05.033 }, 00:28:05.033 { 00:28:05.033 "name": "BaseBdev4", 00:28:05.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.033 "is_configured": false, 00:28:05.033 "data_offset": 0, 00:28:05.033 "data_size": 0 00:28:05.033 } 00:28:05.033 ] 00:28:05.033 }' 00:28:05.033 00:41:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:05.033 00:41:58 -- common/autotest_common.sh@10 -- # set +x 00:28:05.598 00:41:59 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:28:05.857 [2024-04-24 00:41:59.496310] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:05.857 [2024-04-24 00:41:59.496609] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:28:05.857 00:41:59 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:28:05.857 00:41:59 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:06.114 00:41:59 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:06.409 BaseBdev1 00:28:06.409 00:42:00 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:28:06.409 00:42:00 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:28:06.409 00:42:00 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:06.409 00:42:00 -- common/autotest_common.sh@887 -- # local i 00:28:06.409 00:42:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:06.409 00:42:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:06.409 00:42:00 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:06.665 00:42:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:06.923 [ 00:28:06.923 { 00:28:06.923 "name": "BaseBdev1", 00:28:06.923 "aliases": [ 00:28:06.923 "33a512f7-07a1-4e9c-9396-62531b1d9bb2" 00:28:06.923 ], 00:28:06.923 "product_name": "Malloc disk", 00:28:06.923 "block_size": 512, 00:28:06.923 "num_blocks": 65536, 00:28:06.923 "uuid": "33a512f7-07a1-4e9c-9396-62531b1d9bb2", 00:28:06.923 "assigned_rate_limits": { 00:28:06.923 "rw_ios_per_sec": 0, 00:28:06.923 "rw_mbytes_per_sec": 0, 00:28:06.923 "r_mbytes_per_sec": 0, 00:28:06.923 "w_mbytes_per_sec": 0 00:28:06.923 }, 00:28:06.923 "claimed": false, 00:28:06.923 "zoned": false, 00:28:06.923 "supported_io_types": { 00:28:06.923 "read": true, 00:28:06.923 "write": true, 00:28:06.923 "unmap": true, 00:28:06.923 "write_zeroes": true, 00:28:06.923 "flush": true, 00:28:06.923 "reset": true, 00:28:06.923 "compare": false, 00:28:06.923 "compare_and_write": false, 00:28:06.923 "abort": true, 00:28:06.923 "nvme_admin": false, 00:28:06.923 "nvme_io": false 00:28:06.923 }, 00:28:06.923 "memory_domains": [ 00:28:06.923 { 00:28:06.923 "dma_device_id": "system", 00:28:06.923 "dma_device_type": 1 00:28:06.923 }, 00:28:06.923 { 00:28:06.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.923 "dma_device_type": 2 
00:28:06.923 } 00:28:06.923 ], 00:28:06.923 "driver_specific": {} 00:28:06.923 } 00:28:06.923 ] 00:28:06.923 00:42:00 -- common/autotest_common.sh@893 -- # return 0 00:28:06.923 00:42:00 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:28:07.180 [2024-04-24 00:42:00.863791] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:07.180 [2024-04-24 00:42:00.866261] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:07.180 [2024-04-24 00:42:00.866490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:07.180 [2024-04-24 00:42:00.866585] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:07.180 [2024-04-24 00:42:00.866719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:07.180 [2024-04-24 00:42:00.866831] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:28:07.180 [2024-04-24 00:42:00.866886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:28:07.180 00:42:00 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:28:07.180 00:42:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:28:07.180 00:42:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:07.180 00:42:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:07.180 00:42:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:07.180 00:42:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:07.180 00:42:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:07.181 00:42:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:07.181 00:42:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:07.181 00:42:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:07.181 00:42:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:07.181 00:42:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:07.181 00:42:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.181 00:42:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:07.439 00:42:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:07.439 "name": "Existed_Raid", 00:28:07.439 "uuid": "8352b9f6-95c1-4e60-b04f-b13b0c190968", 00:28:07.439 "strip_size_kb": 64, 00:28:07.439 "state": "configuring", 00:28:07.439 "raid_level": "raid5f", 00:28:07.439 "superblock": true, 00:28:07.439 "num_base_bdevs": 4, 00:28:07.439 "num_base_bdevs_discovered": 1, 00:28:07.439 "num_base_bdevs_operational": 4, 00:28:07.439 "base_bdevs_list": [ 00:28:07.439 { 00:28:07.439 "name": "BaseBdev1", 00:28:07.439 "uuid": "33a512f7-07a1-4e9c-9396-62531b1d9bb2", 00:28:07.439 "is_configured": true, 00:28:07.439 "data_offset": 2048, 00:28:07.439 "data_size": 63488 00:28:07.439 }, 00:28:07.439 { 00:28:07.439 "name": "BaseBdev2", 00:28:07.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.439 "is_configured": false, 00:28:07.439 "data_offset": 0, 00:28:07.439 "data_size": 0 00:28:07.439 }, 00:28:07.439 { 00:28:07.439 "name": "BaseBdev3", 00:28:07.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.439 "is_configured": 
false, 00:28:07.439 "data_offset": 0, 00:28:07.439 "data_size": 0 00:28:07.439 }, 00:28:07.439 { 00:28:07.439 "name": "BaseBdev4", 00:28:07.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.439 "is_configured": false, 00:28:07.439 "data_offset": 0, 00:28:07.439 "data_size": 0 00:28:07.439 } 00:28:07.439 ] 00:28:07.439 }' 00:28:07.439 00:42:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:07.439 00:42:01 -- common/autotest_common.sh@10 -- # set +x 00:28:08.005 00:42:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:08.262 [2024-04-24 00:42:01.978102] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:08.262 BaseBdev2 00:28:08.262 00:42:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:28:08.262 00:42:01 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:28:08.262 00:42:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:08.262 00:42:01 -- common/autotest_common.sh@887 -- # local i 00:28:08.262 00:42:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:08.262 00:42:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:08.262 00:42:01 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:08.520 00:42:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:08.777 [ 00:28:08.777 { 00:28:08.777 "name": "BaseBdev2", 00:28:08.777 "aliases": [ 00:28:08.777 "deabc056-548f-4d07-92c1-c0072c02b05b" 00:28:08.777 ], 00:28:08.777 "product_name": "Malloc disk", 00:28:08.777 "block_size": 512, 00:28:08.777 "num_blocks": 65536, 00:28:08.777 "uuid": "deabc056-548f-4d07-92c1-c0072c02b05b", 00:28:08.777 "assigned_rate_limits": { 00:28:08.777 "rw_ios_per_sec": 0, 00:28:08.777 "rw_mbytes_per_sec": 0, 00:28:08.777 "r_mbytes_per_sec": 0, 00:28:08.777 "w_mbytes_per_sec": 0 00:28:08.777 }, 00:28:08.777 "claimed": true, 00:28:08.777 "claim_type": "exclusive_write", 00:28:08.777 "zoned": false, 00:28:08.777 "supported_io_types": { 00:28:08.777 "read": true, 00:28:08.777 "write": true, 00:28:08.777 "unmap": true, 00:28:08.777 "write_zeroes": true, 00:28:08.777 "flush": true, 00:28:08.777 "reset": true, 00:28:08.777 "compare": false, 00:28:08.777 "compare_and_write": false, 00:28:08.777 "abort": true, 00:28:08.777 "nvme_admin": false, 00:28:08.777 "nvme_io": false 00:28:08.777 }, 00:28:08.777 "memory_domains": [ 00:28:08.777 { 00:28:08.777 "dma_device_id": "system", 00:28:08.777 "dma_device_type": 1 00:28:08.777 }, 00:28:08.777 { 00:28:08.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:08.777 "dma_device_type": 2 00:28:08.777 } 00:28:08.777 ], 00:28:08.777 "driver_specific": {} 00:28:08.777 } 00:28:08.777 ] 00:28:08.777 00:42:02 -- common/autotest_common.sh@893 -- # return 0 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.777 00:42:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:09.034 00:42:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:09.034 "name": "Existed_Raid", 00:28:09.034 "uuid": "8352b9f6-95c1-4e60-b04f-b13b0c190968", 00:28:09.034 "strip_size_kb": 64, 00:28:09.034 "state": "configuring", 00:28:09.034 "raid_level": "raid5f", 00:28:09.034 "superblock": true, 00:28:09.034 "num_base_bdevs": 4, 00:28:09.034 "num_base_bdevs_discovered": 2, 00:28:09.034 "num_base_bdevs_operational": 4, 00:28:09.034 "base_bdevs_list": [ 00:28:09.034 { 00:28:09.034 "name": "BaseBdev1", 00:28:09.034 "uuid": "33a512f7-07a1-4e9c-9396-62531b1d9bb2", 00:28:09.034 "is_configured": true, 00:28:09.034 "data_offset": 2048, 00:28:09.034 "data_size": 63488 00:28:09.034 }, 00:28:09.034 { 00:28:09.034 "name": "BaseBdev2", 00:28:09.034 "uuid": "deabc056-548f-4d07-92c1-c0072c02b05b", 00:28:09.034 "is_configured": true, 00:28:09.034 "data_offset": 2048, 00:28:09.034 "data_size": 63488 00:28:09.034 }, 00:28:09.034 { 00:28:09.034 "name": "BaseBdev3", 00:28:09.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.034 "is_configured": false, 00:28:09.034 "data_offset": 0, 00:28:09.034 "data_size": 0 00:28:09.034 }, 00:28:09.034 { 00:28:09.034 "name": "BaseBdev4", 00:28:09.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.034 "is_configured": false, 00:28:09.034 "data_offset": 0, 00:28:09.034 "data_size": 0 00:28:09.034 } 00:28:09.034 ] 00:28:09.034 }' 00:28:09.034 00:42:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:09.034 00:42:02 -- common/autotest_common.sh@10 -- # set +x 00:28:09.598 00:42:03 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:09.856 [2024-04-24 00:42:03.581081] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:09.856 BaseBdev3 00:28:09.856 00:42:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:28:09.856 00:42:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:28:09.856 00:42:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:09.856 00:42:03 -- common/autotest_common.sh@887 -- # local i 00:28:09.856 00:42:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:09.856 00:42:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:09.856 00:42:03 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:10.113 00:42:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:10.370 [ 00:28:10.370 { 00:28:10.370 "name": "BaseBdev3", 00:28:10.370 "aliases": [ 00:28:10.370 "2969d649-eb2d-4859-934b-e26b587ff62d" 00:28:10.370 ], 00:28:10.370 "product_name": "Malloc disk", 00:28:10.370 "block_size": 512, 00:28:10.370 "num_blocks": 65536, 00:28:10.370 "uuid": 
"2969d649-eb2d-4859-934b-e26b587ff62d", 00:28:10.370 "assigned_rate_limits": { 00:28:10.370 "rw_ios_per_sec": 0, 00:28:10.370 "rw_mbytes_per_sec": 0, 00:28:10.370 "r_mbytes_per_sec": 0, 00:28:10.370 "w_mbytes_per_sec": 0 00:28:10.370 }, 00:28:10.370 "claimed": true, 00:28:10.370 "claim_type": "exclusive_write", 00:28:10.370 "zoned": false, 00:28:10.370 "supported_io_types": { 00:28:10.370 "read": true, 00:28:10.370 "write": true, 00:28:10.370 "unmap": true, 00:28:10.370 "write_zeroes": true, 00:28:10.370 "flush": true, 00:28:10.370 "reset": true, 00:28:10.370 "compare": false, 00:28:10.370 "compare_and_write": false, 00:28:10.370 "abort": true, 00:28:10.370 "nvme_admin": false, 00:28:10.370 "nvme_io": false 00:28:10.370 }, 00:28:10.370 "memory_domains": [ 00:28:10.370 { 00:28:10.370 "dma_device_id": "system", 00:28:10.370 "dma_device_type": 1 00:28:10.370 }, 00:28:10.370 { 00:28:10.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.370 "dma_device_type": 2 00:28:10.370 } 00:28:10.370 ], 00:28:10.370 "driver_specific": {} 00:28:10.370 } 00:28:10.370 ] 00:28:10.370 00:42:04 -- common/autotest_common.sh@893 -- # return 0 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.370 00:42:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:10.628 00:42:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:10.628 "name": "Existed_Raid", 00:28:10.628 "uuid": "8352b9f6-95c1-4e60-b04f-b13b0c190968", 00:28:10.628 "strip_size_kb": 64, 00:28:10.628 "state": "configuring", 00:28:10.628 "raid_level": "raid5f", 00:28:10.628 "superblock": true, 00:28:10.628 "num_base_bdevs": 4, 00:28:10.628 "num_base_bdevs_discovered": 3, 00:28:10.628 "num_base_bdevs_operational": 4, 00:28:10.628 "base_bdevs_list": [ 00:28:10.628 { 00:28:10.628 "name": "BaseBdev1", 00:28:10.628 "uuid": "33a512f7-07a1-4e9c-9396-62531b1d9bb2", 00:28:10.628 "is_configured": true, 00:28:10.628 "data_offset": 2048, 00:28:10.628 "data_size": 63488 00:28:10.628 }, 00:28:10.628 { 00:28:10.628 "name": "BaseBdev2", 00:28:10.628 "uuid": "deabc056-548f-4d07-92c1-c0072c02b05b", 00:28:10.628 "is_configured": true, 00:28:10.628 "data_offset": 2048, 00:28:10.628 "data_size": 63488 00:28:10.628 }, 00:28:10.628 { 00:28:10.628 "name": "BaseBdev3", 00:28:10.628 "uuid": "2969d649-eb2d-4859-934b-e26b587ff62d", 00:28:10.628 "is_configured": true, 00:28:10.628 "data_offset": 2048, 00:28:10.628 "data_size": 63488 00:28:10.628 }, 00:28:10.628 { 00:28:10.628 "name": "BaseBdev4", 00:28:10.628 
"uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.628 "is_configured": false, 00:28:10.628 "data_offset": 0, 00:28:10.628 "data_size": 0 00:28:10.628 } 00:28:10.628 ] 00:28:10.628 }' 00:28:10.628 00:42:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:10.628 00:42:04 -- common/autotest_common.sh@10 -- # set +x 00:28:11.192 00:42:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:11.757 [2024-04-24 00:42:05.326348] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:11.757 [2024-04-24 00:42:05.326866] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:28:11.758 [2024-04-24 00:42:05.327073] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:11.758 [2024-04-24 00:42:05.327348] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:28:11.758 BaseBdev4 00:28:11.758 [2024-04-24 00:42:05.336455] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:28:11.758 [2024-04-24 00:42:05.336730] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:28:11.758 [2024-04-24 00:42:05.337051] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:11.758 00:42:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:28:11.758 00:42:05 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:28:11.758 00:42:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:28:11.758 00:42:05 -- common/autotest_common.sh@887 -- # local i 00:28:11.758 00:42:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:28:11.758 00:42:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:28:11.758 00:42:05 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:28:12.014 00:42:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:28:12.272 [ 00:28:12.272 { 00:28:12.272 "name": "BaseBdev4", 00:28:12.272 "aliases": [ 00:28:12.272 "b44d1ad4-9817-4852-9e48-67ff4e79098d" 00:28:12.272 ], 00:28:12.272 "product_name": "Malloc disk", 00:28:12.272 "block_size": 512, 00:28:12.272 "num_blocks": 65536, 00:28:12.272 "uuid": "b44d1ad4-9817-4852-9e48-67ff4e79098d", 00:28:12.272 "assigned_rate_limits": { 00:28:12.272 "rw_ios_per_sec": 0, 00:28:12.272 "rw_mbytes_per_sec": 0, 00:28:12.272 "r_mbytes_per_sec": 0, 00:28:12.272 "w_mbytes_per_sec": 0 00:28:12.272 }, 00:28:12.272 "claimed": true, 00:28:12.272 "claim_type": "exclusive_write", 00:28:12.272 "zoned": false, 00:28:12.272 "supported_io_types": { 00:28:12.272 "read": true, 00:28:12.272 "write": true, 00:28:12.272 "unmap": true, 00:28:12.272 "write_zeroes": true, 00:28:12.272 "flush": true, 00:28:12.272 "reset": true, 00:28:12.272 "compare": false, 00:28:12.272 "compare_and_write": false, 00:28:12.272 "abort": true, 00:28:12.272 "nvme_admin": false, 00:28:12.272 "nvme_io": false 00:28:12.272 }, 00:28:12.272 "memory_domains": [ 00:28:12.272 { 00:28:12.272 "dma_device_id": "system", 00:28:12.272 "dma_device_type": 1 00:28:12.272 }, 00:28:12.272 { 00:28:12.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:12.272 "dma_device_type": 2 00:28:12.272 } 00:28:12.272 ], 00:28:12.272 "driver_specific": {} 00:28:12.272 } 00:28:12.272 ] 
00:28:12.272 00:42:05 -- common/autotest_common.sh@893 -- # return 0 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.272 00:42:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:12.529 00:42:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:12.529 "name": "Existed_Raid", 00:28:12.529 "uuid": "8352b9f6-95c1-4e60-b04f-b13b0c190968", 00:28:12.529 "strip_size_kb": 64, 00:28:12.529 "state": "online", 00:28:12.529 "raid_level": "raid5f", 00:28:12.529 "superblock": true, 00:28:12.530 "num_base_bdevs": 4, 00:28:12.530 "num_base_bdevs_discovered": 4, 00:28:12.530 "num_base_bdevs_operational": 4, 00:28:12.530 "base_bdevs_list": [ 00:28:12.530 { 00:28:12.530 "name": "BaseBdev1", 00:28:12.530 "uuid": "33a512f7-07a1-4e9c-9396-62531b1d9bb2", 00:28:12.530 "is_configured": true, 00:28:12.530 "data_offset": 2048, 00:28:12.530 "data_size": 63488 00:28:12.530 }, 00:28:12.530 { 00:28:12.530 "name": "BaseBdev2", 00:28:12.530 "uuid": "deabc056-548f-4d07-92c1-c0072c02b05b", 00:28:12.530 "is_configured": true, 00:28:12.530 "data_offset": 2048, 00:28:12.530 "data_size": 63488 00:28:12.530 }, 00:28:12.530 { 00:28:12.530 "name": "BaseBdev3", 00:28:12.530 "uuid": "2969d649-eb2d-4859-934b-e26b587ff62d", 00:28:12.530 "is_configured": true, 00:28:12.530 "data_offset": 2048, 00:28:12.530 "data_size": 63488 00:28:12.530 }, 00:28:12.530 { 00:28:12.530 "name": "BaseBdev4", 00:28:12.530 "uuid": "b44d1ad4-9817-4852-9e48-67ff4e79098d", 00:28:12.530 "is_configured": true, 00:28:12.530 "data_offset": 2048, 00:28:12.530 "data_size": 63488 00:28:12.530 } 00:28:12.530 ] 00:28:12.530 }' 00:28:12.530 00:42:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:12.530 00:42:06 -- common/autotest_common.sh@10 -- # set +x 00:28:13.461 00:42:06 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:28:13.461 [2024-04-24 00:42:07.207450] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@196 -- # return 0 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@117 -- # 
local raid_bdev_name=Existed_Raid 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:13.718 00:42:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.974 00:42:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:13.974 "name": "Existed_Raid", 00:28:13.974 "uuid": "8352b9f6-95c1-4e60-b04f-b13b0c190968", 00:28:13.974 "strip_size_kb": 64, 00:28:13.974 "state": "online", 00:28:13.975 "raid_level": "raid5f", 00:28:13.975 "superblock": true, 00:28:13.975 "num_base_bdevs": 4, 00:28:13.975 "num_base_bdevs_discovered": 3, 00:28:13.975 "num_base_bdevs_operational": 3, 00:28:13.975 "base_bdevs_list": [ 00:28:13.975 { 00:28:13.975 "name": null, 00:28:13.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.975 "is_configured": false, 00:28:13.975 "data_offset": 2048, 00:28:13.975 "data_size": 63488 00:28:13.975 }, 00:28:13.975 { 00:28:13.975 "name": "BaseBdev2", 00:28:13.975 "uuid": "deabc056-548f-4d07-92c1-c0072c02b05b", 00:28:13.975 "is_configured": true, 00:28:13.975 "data_offset": 2048, 00:28:13.975 "data_size": 63488 00:28:13.975 }, 00:28:13.975 { 00:28:13.975 "name": "BaseBdev3", 00:28:13.975 "uuid": "2969d649-eb2d-4859-934b-e26b587ff62d", 00:28:13.975 "is_configured": true, 00:28:13.975 "data_offset": 2048, 00:28:13.975 "data_size": 63488 00:28:13.975 }, 00:28:13.975 { 00:28:13.975 "name": "BaseBdev4", 00:28:13.975 "uuid": "b44d1ad4-9817-4852-9e48-67ff4e79098d", 00:28:13.975 "is_configured": true, 00:28:13.975 "data_offset": 2048, 00:28:13.975 "data_size": 63488 00:28:13.975 } 00:28:13.975 ] 00:28:13.975 }' 00:28:13.975 00:42:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:13.975 00:42:07 -- common/autotest_common.sh@10 -- # set +x 00:28:14.907 00:42:08 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:28:14.907 00:42:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:14.907 00:42:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.907 00:42:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:28:14.907 00:42:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:28:14.907 00:42:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:14.907 00:42:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:15.473 [2024-04-24 00:42:09.014100] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:15.473 [2024-04-24 00:42:09.014514] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:15.473 [2024-04-24 00:42:09.127511] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:15.473 00:42:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:28:15.473 00:42:09 -- 
bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:15.473 00:42:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.473 00:42:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:28:15.731 00:42:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:28:15.731 00:42:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:15.731 00:42:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:28:15.988 [2024-04-24 00:42:09.639830] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:15.988 00:42:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:28:15.988 00:42:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:15.988 00:42:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.988 00:42:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:28:16.554 00:42:10 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:28:16.554 00:42:10 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:16.554 00:42:10 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:28:16.813 [2024-04-24 00:42:10.469396] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:28:16.813 [2024-04-24 00:42:10.469700] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:28:16.813 00:42:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:28:16.813 00:42:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:28:16.813 00:42:10 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.813 00:42:10 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:28:17.070 00:42:10 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:28:17.070 00:42:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:28:17.070 00:42:10 -- bdev/bdev_raid.sh@287 -- # killprocess 139120 00:28:17.070 00:42:10 -- common/autotest_common.sh@936 -- # '[' -z 139120 ']' 00:28:17.070 00:42:10 -- common/autotest_common.sh@940 -- # kill -0 139120 00:28:17.070 00:42:10 -- common/autotest_common.sh@941 -- # uname 00:28:17.070 00:42:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:17.070 00:42:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139120 00:28:17.327 00:42:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:17.327 00:42:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:17.327 00:42:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139120' 00:28:17.327 killing process with pid 139120 00:28:17.327 00:42:10 -- common/autotest_common.sh@955 -- # kill 139120 00:28:17.327 [2024-04-24 00:42:10.871175] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:17.327 00:42:10 -- common/autotest_common.sh@960 -- # wait 139120 00:28:17.327 [2024-04-24 00:42:10.871328] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:18.696 ************************************ 00:28:18.696 END TEST raid5f_state_function_test_sb 00:28:18.696 ************************************ 00:28:18.696 00:42:12 -- bdev/bdev_raid.sh@289 -- # return 0 00:28:18.696 00:28:18.696 real 0m17.636s 00:28:18.696 user 0m30.797s 
00:28:18.696 sys 0m2.171s 00:28:18.696 00:42:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:18.696 00:42:12 -- common/autotest_common.sh@10 -- # set +x 00:28:18.696 00:42:12 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:28:18.696 00:42:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:28:18.696 00:42:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:18.696 00:42:12 -- common/autotest_common.sh@10 -- # set +x 00:28:18.696 ************************************ 00:28:18.696 START TEST raid5f_superblock_test 00:28:18.696 ************************************ 00:28:18.696 00:42:12 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid5f 4 00:28:18.696 00:42:12 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:28:18.696 00:42:12 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:28:18.696 00:42:12 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@357 -- # raid_pid=139593 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:18.697 00:42:12 -- bdev/bdev_raid.sh@358 -- # waitforlisten 139593 /var/tmp/spdk-raid.sock 00:28:18.697 00:42:12 -- common/autotest_common.sh@817 -- # '[' -z 139593 ']' 00:28:18.697 00:42:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:18.697 00:42:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:18.697 00:42:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:18.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:18.697 00:42:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:18.697 00:42:12 -- common/autotest_common.sh@10 -- # set +x 00:28:18.955 [2024-04-24 00:42:12.491637] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:28:18.955 [2024-04-24 00:42:12.492092] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139593 ] 00:28:18.955 [2024-04-24 00:42:12.674724] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.212 [2024-04-24 00:42:12.930663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.470 [2024-04-24 00:42:13.165294] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:19.728 00:42:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:19.728 00:42:13 -- common/autotest_common.sh@850 -- # return 0 00:28:19.728 00:42:13 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:28:19.728 00:42:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:19.728 00:42:13 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:28:19.728 00:42:13 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:28:19.728 00:42:13 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:19.728 00:42:13 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:19.728 00:42:13 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:28:19.728 00:42:13 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:19.728 00:42:13 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:28:20.293 malloc1 00:28:20.293 00:42:13 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:20.551 [2024-04-24 00:42:14.129957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:20.551 [2024-04-24 00:42:14.130321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:20.551 [2024-04-24 00:42:14.130470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:28:20.551 [2024-04-24 00:42:14.130604] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:20.551 [2024-04-24 00:42:14.133536] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:20.551 [2024-04-24 00:42:14.133771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:20.551 pt1 00:28:20.551 00:42:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:28:20.551 00:42:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:20.551 00:42:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:28:20.551 00:42:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:28:20.551 00:42:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:20.551 00:42:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:20.551 00:42:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:28:20.551 00:42:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:20.551 00:42:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:28:20.809 malloc2 00:28:20.809 00:42:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:28:21.068 [2024-04-24 00:42:14.823679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:21.068 [2024-04-24 00:42:14.824054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:21.068 [2024-04-24 00:42:14.824208] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:21.068 [2024-04-24 00:42:14.824343] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:21.068 [2024-04-24 00:42:14.827064] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:21.068 [2024-04-24 00:42:14.827298] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:21.068 pt2 00:28:21.068 00:42:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:28:21.068 00:42:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:21.068 00:42:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:28:21.068 00:42:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:28:21.068 00:42:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:21.068 00:42:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:21.068 00:42:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:28:21.068 00:42:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:21.068 00:42:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:28:21.326 malloc3 00:28:21.583 00:42:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:21.841 [2024-04-24 00:42:15.409403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:21.841 [2024-04-24 00:42:15.409742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:21.841 [2024-04-24 00:42:15.409916] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:28:21.841 [2024-04-24 00:42:15.410050] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:21.841 [2024-04-24 00:42:15.412857] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:21.841 [2024-04-24 00:42:15.413103] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:21.841 pt3 00:28:21.841 00:42:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:28:21.841 00:42:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:21.841 00:42:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:28:21.841 00:42:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:28:21.841 00:42:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:28:21.841 00:42:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:21.841 00:42:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:28:21.841 00:42:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:21.841 00:42:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:28:22.099 malloc4 00:28:22.099 00:42:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:28:22.365 [2024-04-24 00:42:16.047093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:22.365 [2024-04-24 00:42:16.047428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:22.365 [2024-04-24 00:42:16.047575] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:28:22.365 [2024-04-24 00:42:16.047701] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:22.365 [2024-04-24 00:42:16.050473] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:22.365 [2024-04-24 00:42:16.050699] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:22.365 pt4 00:28:22.365 00:42:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:28:22.365 00:42:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:28:22.365 00:42:16 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:28:22.645 [2024-04-24 00:42:16.363171] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:22.645 [2024-04-24 00:42:16.365692] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:22.645 [2024-04-24 00:42:16.365978] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:22.645 [2024-04-24 00:42:16.366167] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:22.645 [2024-04-24 00:42:16.366510] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:28:22.645 [2024-04-24 00:42:16.366626] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:22.646 [2024-04-24 00:42:16.366798] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:28:22.646 [2024-04-24 00:42:16.375624] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:28:22.646 [2024-04-24 00:42:16.375863] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:28:22.646 [2024-04-24 00:42:16.376247] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:22.646 00:42:16 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:22.646 00:42:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:22.646 00:42:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:22.646 00:42:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:22.646 00:42:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:22.646 00:42:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:22.646 00:42:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:22.646 00:42:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:22.646 00:42:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:22.646 00:42:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:22.646 00:42:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.646 00:42:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.903 00:42:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:22.903 "name": "raid_bdev1", 00:28:22.903 "uuid": 
"85e53ea6-08ac-42c4-889c-48a98f4143d1", 00:28:22.903 "strip_size_kb": 64, 00:28:22.903 "state": "online", 00:28:22.903 "raid_level": "raid5f", 00:28:22.903 "superblock": true, 00:28:22.903 "num_base_bdevs": 4, 00:28:22.903 "num_base_bdevs_discovered": 4, 00:28:22.903 "num_base_bdevs_operational": 4, 00:28:22.903 "base_bdevs_list": [ 00:28:22.903 { 00:28:22.903 "name": "pt1", 00:28:22.903 "uuid": "82a4b5f7-2121-5298-898f-15eeaf2d1bb8", 00:28:22.904 "is_configured": true, 00:28:22.904 "data_offset": 2048, 00:28:22.904 "data_size": 63488 00:28:22.904 }, 00:28:22.904 { 00:28:22.904 "name": "pt2", 00:28:22.904 "uuid": "f368a2f3-ac5b-5bd4-aefa-7947926ea6e0", 00:28:22.904 "is_configured": true, 00:28:22.904 "data_offset": 2048, 00:28:22.904 "data_size": 63488 00:28:22.904 }, 00:28:22.904 { 00:28:22.904 "name": "pt3", 00:28:22.904 "uuid": "95bee972-420c-587e-9916-bdd1e9ed29c7", 00:28:22.904 "is_configured": true, 00:28:22.904 "data_offset": 2048, 00:28:22.904 "data_size": 63488 00:28:22.904 }, 00:28:22.904 { 00:28:22.904 "name": "pt4", 00:28:22.904 "uuid": "9422085f-9a42-5b0e-946b-bb102e16346e", 00:28:22.904 "is_configured": true, 00:28:22.904 "data_offset": 2048, 00:28:22.904 "data_size": 63488 00:28:22.904 } 00:28:22.904 ] 00:28:22.904 }' 00:28:22.904 00:42:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:22.904 00:42:16 -- common/autotest_common.sh@10 -- # set +x 00:28:23.837 00:42:17 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:28:23.837 00:42:17 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:23.837 [2024-04-24 00:42:17.474803] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:23.837 00:42:17 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=85e53ea6-08ac-42c4-889c-48a98f4143d1 00:28:23.837 00:42:17 -- bdev/bdev_raid.sh@380 -- # '[' -z 85e53ea6-08ac-42c4-889c-48a98f4143d1 ']' 00:28:23.837 00:42:17 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:24.096 [2024-04-24 00:42:17.698649] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:24.096 [2024-04-24 00:42:17.698941] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:24.096 [2024-04-24 00:42:17.699148] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:24.096 [2024-04-24 00:42:17.699328] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:24.096 [2024-04-24 00:42:17.699417] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:28:24.096 00:42:17 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:28:24.096 00:42:17 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.406 00:42:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:28:24.406 00:42:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:28:24.406 00:42:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:28:24.406 00:42:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:24.663 00:42:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:28:24.663 00:42:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:28:24.921 00:42:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:28:24.921 00:42:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:25.177 00:42:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:28:25.177 00:42:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:28:25.177 00:42:18 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:28:25.177 00:42:18 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:25.742 00:42:19 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:28:25.742 00:42:19 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:28:25.742 00:42:19 -- common/autotest_common.sh@638 -- # local es=0 00:28:25.742 00:42:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:28:25.742 00:42:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:25.742 00:42:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:25.742 00:42:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:25.742 00:42:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:25.742 00:42:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:25.742 00:42:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:25.742 00:42:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:25.742 00:42:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:25.742 00:42:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:28:25.742 [2024-04-24 00:42:19.459021] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:25.742 [2024-04-24 00:42:19.461547] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:25.742 [2024-04-24 00:42:19.461865] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:28:25.742 [2024-04-24 00:42:19.461942] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:28:25.742 [2024-04-24 00:42:19.462094] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:28:25.742 [2024-04-24 00:42:19.462209] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:28:25.742 [2024-04-24 00:42:19.462391] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:28:25.742 [2024-04-24 00:42:19.462574] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:28:25.742 [2024-04-24 00:42:19.462714] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:25.742 [2024-04-24 00:42:19.462757] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:28:25.742 request: 00:28:25.742 { 00:28:25.742 "name": "raid_bdev1", 00:28:25.742 "raid_level": "raid5f", 00:28:25.742 "base_bdevs": [ 00:28:25.742 "malloc1", 00:28:25.742 "malloc2", 00:28:25.742 "malloc3", 00:28:25.742 "malloc4" 00:28:25.742 ], 00:28:25.742 "superblock": false, 00:28:25.742 "strip_size_kb": 64, 00:28:25.742 "method": "bdev_raid_create", 00:28:25.742 "req_id": 1 00:28:25.742 } 00:28:25.742 Got JSON-RPC error response 00:28:25.742 response: 00:28:25.742 { 00:28:25.742 "code": -17, 00:28:25.742 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:25.742 } 00:28:25.742 00:42:19 -- common/autotest_common.sh@641 -- # es=1 00:28:25.742 00:42:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:25.742 00:42:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:25.742 00:42:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:25.742 00:42:19 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.742 00:42:19 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:28:26.308 00:42:19 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:28:26.308 00:42:19 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:28:26.308 00:42:19 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:26.308 [2024-04-24 00:42:19.991267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:26.308 [2024-04-24 00:42:19.991614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.308 [2024-04-24 00:42:19.991756] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:28:26.308 [2024-04-24 00:42:19.991862] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.308 [2024-04-24 00:42:19.994606] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.308 [2024-04-24 00:42:19.994861] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:26.308 [2024-04-24 00:42:19.995140] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:28:26.308 [2024-04-24 00:42:19.995313] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:26.308 pt1 00:28:26.308 00:42:20 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:28:26.308 00:42:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:26.308 00:42:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:26.308 00:42:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:26.308 00:42:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:26.308 00:42:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:26.308 00:42:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:26.308 00:42:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:26.308 00:42:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:26.308 00:42:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:26.308 00:42:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.308 00:42:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.566 00:42:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:26.566 "name": "raid_bdev1", 00:28:26.566 "uuid": "85e53ea6-08ac-42c4-889c-48a98f4143d1", 00:28:26.566 "strip_size_kb": 64, 00:28:26.566 "state": "configuring", 00:28:26.566 "raid_level": "raid5f", 00:28:26.566 "superblock": true, 00:28:26.566 "num_base_bdevs": 4, 00:28:26.566 "num_base_bdevs_discovered": 1, 00:28:26.566 "num_base_bdevs_operational": 4, 00:28:26.566 "base_bdevs_list": [ 00:28:26.566 { 00:28:26.566 "name": "pt1", 00:28:26.566 "uuid": "82a4b5f7-2121-5298-898f-15eeaf2d1bb8", 00:28:26.566 "is_configured": true, 00:28:26.566 "data_offset": 2048, 00:28:26.566 "data_size": 63488 00:28:26.566 }, 00:28:26.566 { 00:28:26.566 "name": null, 00:28:26.566 "uuid": "f368a2f3-ac5b-5bd4-aefa-7947926ea6e0", 00:28:26.566 "is_configured": false, 00:28:26.566 "data_offset": 2048, 00:28:26.566 "data_size": 63488 00:28:26.566 }, 00:28:26.566 { 00:28:26.566 "name": null, 00:28:26.566 "uuid": "95bee972-420c-587e-9916-bdd1e9ed29c7", 00:28:26.566 "is_configured": false, 00:28:26.566 "data_offset": 2048, 00:28:26.566 "data_size": 63488 00:28:26.566 }, 00:28:26.566 { 00:28:26.566 "name": null, 00:28:26.566 "uuid": "9422085f-9a42-5b0e-946b-bb102e16346e", 00:28:26.566 "is_configured": false, 00:28:26.566 "data_offset": 2048, 00:28:26.566 "data_size": 63488 00:28:26.566 } 00:28:26.566 ] 00:28:26.566 }' 00:28:26.566 00:42:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:26.566 00:42:20 -- common/autotest_common.sh@10 -- # set +x 00:28:27.499 00:42:21 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:28:27.499 00:42:21 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:27.499 [2024-04-24 00:42:21.271942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:27.499 [2024-04-24 00:42:21.272268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:27.499 [2024-04-24 00:42:21.272421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:28:27.499 [2024-04-24 00:42:21.272535] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:27.499 [2024-04-24 00:42:21.273111] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:27.499 [2024-04-24 00:42:21.273301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:27.499 [2024-04-24 00:42:21.273532] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:27.499 [2024-04-24 00:42:21.273652] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:27.499 pt2 00:28:27.757 00:42:21 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:28.037 [2024-04-24 00:42:21.616102] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:28:28.037 00:42:21 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:28:28.037 00:42:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:28.037 00:42:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:28.037 00:42:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:28.037 00:42:21 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:28.037 00:42:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:28.037 00:42:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:28.037 00:42:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:28.037 00:42:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:28.037 00:42:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:28.037 00:42:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.037 00:42:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.297 00:42:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:28.297 "name": "raid_bdev1", 00:28:28.297 "uuid": "85e53ea6-08ac-42c4-889c-48a98f4143d1", 00:28:28.297 "strip_size_kb": 64, 00:28:28.297 "state": "configuring", 00:28:28.297 "raid_level": "raid5f", 00:28:28.297 "superblock": true, 00:28:28.297 "num_base_bdevs": 4, 00:28:28.297 "num_base_bdevs_discovered": 1, 00:28:28.297 "num_base_bdevs_operational": 4, 00:28:28.297 "base_bdevs_list": [ 00:28:28.297 { 00:28:28.297 "name": "pt1", 00:28:28.297 "uuid": "82a4b5f7-2121-5298-898f-15eeaf2d1bb8", 00:28:28.297 "is_configured": true, 00:28:28.297 "data_offset": 2048, 00:28:28.297 "data_size": 63488 00:28:28.297 }, 00:28:28.297 { 00:28:28.297 "name": null, 00:28:28.297 "uuid": "f368a2f3-ac5b-5bd4-aefa-7947926ea6e0", 00:28:28.297 "is_configured": false, 00:28:28.297 "data_offset": 2048, 00:28:28.297 "data_size": 63488 00:28:28.297 }, 00:28:28.297 { 00:28:28.297 "name": null, 00:28:28.297 "uuid": "95bee972-420c-587e-9916-bdd1e9ed29c7", 00:28:28.297 "is_configured": false, 00:28:28.297 "data_offset": 2048, 00:28:28.297 "data_size": 63488 00:28:28.297 }, 00:28:28.297 { 00:28:28.297 "name": null, 00:28:28.297 "uuid": "9422085f-9a42-5b0e-946b-bb102e16346e", 00:28:28.297 "is_configured": false, 00:28:28.297 "data_offset": 2048, 00:28:28.297 "data_size": 63488 00:28:28.297 } 00:28:28.297 ] 00:28:28.297 }' 00:28:28.297 00:42:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:28.297 00:42:22 -- common/autotest_common.sh@10 -- # set +x 00:28:28.863 00:42:22 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:28:28.863 00:42:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:28.863 00:42:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:29.121 [2024-04-24 00:42:22.832354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:29.121 [2024-04-24 00:42:22.832772] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:29.121 [2024-04-24 00:42:22.832978] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:29.121 [2024-04-24 00:42:22.833143] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:29.121 [2024-04-24 00:42:22.833801] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:29.121 [2024-04-24 00:42:22.834028] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:29.121 [2024-04-24 00:42:22.834315] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:29.121 [2024-04-24 00:42:22.834450] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:29.121 pt2 00:28:29.121 00:42:22 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:29.121 00:42:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:29.121 00:42:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:29.379 [2024-04-24 00:42:23.048379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:29.379 [2024-04-24 00:42:23.048690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:29.379 [2024-04-24 00:42:23.048848] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:28:29.379 [2024-04-24 00:42:23.048954] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:29.379 [2024-04-24 00:42:23.049553] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:29.379 [2024-04-24 00:42:23.049735] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:29.379 [2024-04-24 00:42:23.049968] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:29.379 [2024-04-24 00:42:23.050058] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:29.379 pt3 00:28:29.380 00:42:23 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:29.380 00:42:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:28:29.380 00:42:23 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:29.654 [2024-04-24 00:42:23.332481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:29.655 [2024-04-24 00:42:23.332656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:29.655 [2024-04-24 00:42:23.332727] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:29.655 [2024-04-24 00:42:23.332914] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:29.655 [2024-04-24 00:42:23.333519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:29.655 [2024-04-24 00:42:23.333700] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:29.655 [2024-04-24 00:42:23.333938] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:28:29.655 [2024-04-24 00:42:23.334054] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:29.655 [2024-04-24 00:42:23.334280] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:28:29.655 [2024-04-24 00:42:23.334378] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:29.655 [2024-04-24 00:42:23.334519] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:29.655 [2024-04-24 00:42:23.342421] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:28:29.655 [2024-04-24 00:42:23.342632] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:28:29.655 [2024-04-24 00:42:23.343002] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:29.655 pt4 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.655 00:42:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.911 00:42:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:29.911 "name": "raid_bdev1", 00:28:29.911 "uuid": "85e53ea6-08ac-42c4-889c-48a98f4143d1", 00:28:29.911 "strip_size_kb": 64, 00:28:29.911 "state": "online", 00:28:29.911 "raid_level": "raid5f", 00:28:29.911 "superblock": true, 00:28:29.911 "num_base_bdevs": 4, 00:28:29.911 "num_base_bdevs_discovered": 4, 00:28:29.911 "num_base_bdevs_operational": 4, 00:28:29.911 "base_bdevs_list": [ 00:28:29.911 { 00:28:29.911 "name": "pt1", 00:28:29.911 "uuid": "82a4b5f7-2121-5298-898f-15eeaf2d1bb8", 00:28:29.912 "is_configured": true, 00:28:29.912 "data_offset": 2048, 00:28:29.912 "data_size": 63488 00:28:29.912 }, 00:28:29.912 { 00:28:29.912 "name": "pt2", 00:28:29.912 "uuid": "f368a2f3-ac5b-5bd4-aefa-7947926ea6e0", 00:28:29.912 "is_configured": true, 00:28:29.912 "data_offset": 2048, 00:28:29.912 "data_size": 63488 00:28:29.912 }, 00:28:29.912 { 00:28:29.912 "name": "pt3", 00:28:29.912 "uuid": "95bee972-420c-587e-9916-bdd1e9ed29c7", 00:28:29.912 "is_configured": true, 00:28:29.912 "data_offset": 2048, 00:28:29.912 "data_size": 63488 00:28:29.912 }, 00:28:29.912 { 00:28:29.912 "name": "pt4", 00:28:29.912 "uuid": "9422085f-9a42-5b0e-946b-bb102e16346e", 00:28:29.912 "is_configured": true, 00:28:29.912 "data_offset": 2048, 00:28:29.912 "data_size": 63488 00:28:29.912 } 00:28:29.912 ] 00:28:29.912 }' 00:28:29.912 00:42:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:29.912 00:42:23 -- common/autotest_common.sh@10 -- # set +x 00:28:30.498 00:42:24 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:30.498 00:42:24 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:28:30.756 [2024-04-24 00:42:24.449576] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:30.756 00:42:24 -- bdev/bdev_raid.sh@430 -- # '[' 85e53ea6-08ac-42c4-889c-48a98f4143d1 '!=' 85e53ea6-08ac-42c4-889c-48a98f4143d1 ']' 00:28:30.757 00:42:24 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:28:30.757 00:42:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:28:30.757 00:42:24 -- bdev/bdev_raid.sh@196 -- # return 0 00:28:30.757 00:42:24 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:31.014 [2024-04-24 00:42:24.757532] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:31.014 00:42:24 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:31.014 00:42:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:31.014 00:42:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:31.014 00:42:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:31.014 00:42:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:31.014 00:42:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:31.014 00:42:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:31.014 00:42:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:31.014 00:42:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:31.014 00:42:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:31.014 00:42:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.014 00:42:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.610 00:42:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:31.611 "name": "raid_bdev1", 00:28:31.611 "uuid": "85e53ea6-08ac-42c4-889c-48a98f4143d1", 00:28:31.611 "strip_size_kb": 64, 00:28:31.611 "state": "online", 00:28:31.611 "raid_level": "raid5f", 00:28:31.611 "superblock": true, 00:28:31.611 "num_base_bdevs": 4, 00:28:31.611 "num_base_bdevs_discovered": 3, 00:28:31.611 "num_base_bdevs_operational": 3, 00:28:31.611 "base_bdevs_list": [ 00:28:31.611 { 00:28:31.611 "name": null, 00:28:31.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.611 "is_configured": false, 00:28:31.611 "data_offset": 2048, 00:28:31.611 "data_size": 63488 00:28:31.611 }, 00:28:31.611 { 00:28:31.611 "name": "pt2", 00:28:31.611 "uuid": "f368a2f3-ac5b-5bd4-aefa-7947926ea6e0", 00:28:31.611 "is_configured": true, 00:28:31.611 "data_offset": 2048, 00:28:31.611 "data_size": 63488 00:28:31.611 }, 00:28:31.611 { 00:28:31.611 "name": "pt3", 00:28:31.611 "uuid": "95bee972-420c-587e-9916-bdd1e9ed29c7", 00:28:31.611 "is_configured": true, 00:28:31.611 "data_offset": 2048, 00:28:31.611 "data_size": 63488 00:28:31.611 }, 00:28:31.611 { 00:28:31.611 "name": "pt4", 00:28:31.611 "uuid": "9422085f-9a42-5b0e-946b-bb102e16346e", 00:28:31.611 "is_configured": true, 00:28:31.611 "data_offset": 2048, 00:28:31.611 "data_size": 63488 00:28:31.611 } 00:28:31.611 ] 00:28:31.611 }' 00:28:31.611 00:42:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:31.611 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:28:32.175 00:42:25 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:32.433 [2024-04-24 00:42:26.049767] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:32.433 [2024-04-24 00:42:26.050032] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:32.433 [2024-04-24 00:42:26.050212] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:32.433 [2024-04-24 00:42:26.050385] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:32.433 [2024-04-24 00:42:26.050531] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:28:32.433 00:42:26 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.433 00:42:26 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:28:32.691 
00:42:26 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:28:32.691 00:42:26 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:28:32.691 00:42:26 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:28:32.691 00:42:26 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:32.691 00:42:26 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:32.948 00:42:26 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:28:32.948 00:42:26 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:32.948 00:42:26 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:33.205 00:42:26 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:28:33.205 00:42:26 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:33.205 00:42:26 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:28:33.474 00:42:27 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:28:33.474 00:42:27 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:28:33.474 00:42:27 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:28:33.474 00:42:27 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:28:33.474 00:42:27 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:33.732 [2024-04-24 00:42:27.490034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:33.732 [2024-04-24 00:42:27.490360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.732 [2024-04-24 00:42:27.490538] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:28:33.732 [2024-04-24 00:42:27.490672] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.732 [2024-04-24 00:42:27.493444] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.732 [2024-04-24 00:42:27.493679] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:33.732 [2024-04-24 00:42:27.493932] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:33.732 [2024-04-24 00:42:27.494080] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:33.732 pt2 00:28:33.732 00:42:27 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:33.732 00:42:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:33.732 00:42:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:33.732 00:42:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:33.732 00:42:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:33.732 00:42:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:33.732 00:42:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:33.732 00:42:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:33.732 00:42:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:33.732 00:42:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:33.732 00:42:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.732 00:42:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.990 00:42:27 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:28:33.990 "name": "raid_bdev1", 00:28:33.990 "uuid": "85e53ea6-08ac-42c4-889c-48a98f4143d1", 00:28:33.990 "strip_size_kb": 64, 00:28:33.990 "state": "configuring", 00:28:33.990 "raid_level": "raid5f", 00:28:33.990 "superblock": true, 00:28:33.990 "num_base_bdevs": 4, 00:28:33.990 "num_base_bdevs_discovered": 1, 00:28:33.990 "num_base_bdevs_operational": 3, 00:28:33.990 "base_bdevs_list": [ 00:28:33.990 { 00:28:33.990 "name": null, 00:28:33.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:33.990 "is_configured": false, 00:28:33.990 "data_offset": 2048, 00:28:33.990 "data_size": 63488 00:28:33.990 }, 00:28:33.990 { 00:28:33.990 "name": "pt2", 00:28:33.990 "uuid": "f368a2f3-ac5b-5bd4-aefa-7947926ea6e0", 00:28:33.990 "is_configured": true, 00:28:33.990 "data_offset": 2048, 00:28:33.990 "data_size": 63488 00:28:33.990 }, 00:28:33.990 { 00:28:33.990 "name": null, 00:28:33.990 "uuid": "95bee972-420c-587e-9916-bdd1e9ed29c7", 00:28:33.990 "is_configured": false, 00:28:33.990 "data_offset": 2048, 00:28:33.990 "data_size": 63488 00:28:33.990 }, 00:28:33.990 { 00:28:33.990 "name": null, 00:28:33.990 "uuid": "9422085f-9a42-5b0e-946b-bb102e16346e", 00:28:33.990 "is_configured": false, 00:28:33.990 "data_offset": 2048, 00:28:33.990 "data_size": 63488 00:28:33.990 } 00:28:33.990 ] 00:28:33.990 }' 00:28:33.990 00:42:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:33.990 00:42:27 -- common/autotest_common.sh@10 -- # set +x 00:28:34.983 00:42:28 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:28:34.983 00:42:28 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:28:34.983 00:42:28 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:34.983 [2024-04-24 00:42:28.707358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:34.983 [2024-04-24 00:42:28.707666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:34.983 [2024-04-24 00:42:28.707751] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:28:34.983 [2024-04-24 00:42:28.707864] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:34.983 [2024-04-24 00:42:28.708396] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:34.983 [2024-04-24 00:42:28.708568] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:34.983 [2024-04-24 00:42:28.708825] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:34.983 [2024-04-24 00:42:28.708941] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:34.983 pt3 00:28:35.259 00:42:28 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:35.259 00:42:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:35.259 00:42:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:35.259 00:42:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:35.259 00:42:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:35.259 00:42:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:35.259 00:42:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:35.259 00:42:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:35.259 00:42:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:28:35.259 00:42:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:35.259 00:42:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.259 00:42:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.259 00:42:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:35.259 "name": "raid_bdev1", 00:28:35.259 "uuid": "85e53ea6-08ac-42c4-889c-48a98f4143d1", 00:28:35.259 "strip_size_kb": 64, 00:28:35.259 "state": "configuring", 00:28:35.259 "raid_level": "raid5f", 00:28:35.259 "superblock": true, 00:28:35.259 "num_base_bdevs": 4, 00:28:35.259 "num_base_bdevs_discovered": 2, 00:28:35.259 "num_base_bdevs_operational": 3, 00:28:35.259 "base_bdevs_list": [ 00:28:35.259 { 00:28:35.259 "name": null, 00:28:35.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.259 "is_configured": false, 00:28:35.259 "data_offset": 2048, 00:28:35.259 "data_size": 63488 00:28:35.259 }, 00:28:35.259 { 00:28:35.259 "name": "pt2", 00:28:35.259 "uuid": "f368a2f3-ac5b-5bd4-aefa-7947926ea6e0", 00:28:35.259 "is_configured": true, 00:28:35.259 "data_offset": 2048, 00:28:35.259 "data_size": 63488 00:28:35.259 }, 00:28:35.259 { 00:28:35.259 "name": "pt3", 00:28:35.259 "uuid": "95bee972-420c-587e-9916-bdd1e9ed29c7", 00:28:35.259 "is_configured": true, 00:28:35.259 "data_offset": 2048, 00:28:35.259 "data_size": 63488 00:28:35.259 }, 00:28:35.259 { 00:28:35.259 "name": null, 00:28:35.259 "uuid": "9422085f-9a42-5b0e-946b-bb102e16346e", 00:28:35.259 "is_configured": false, 00:28:35.259 "data_offset": 2048, 00:28:35.259 "data_size": 63488 00:28:35.259 } 00:28:35.259 ] 00:28:35.259 }' 00:28:35.259 00:42:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:35.259 00:42:29 -- common/autotest_common.sh@10 -- # set +x 00:28:36.195 00:42:29 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:28:36.195 00:42:29 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:28:36.195 00:42:29 -- bdev/bdev_raid.sh@462 -- # i=3 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:36.196 [2024-04-24 00:42:29.955619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:36.196 [2024-04-24 00:42:29.955879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:36.196 [2024-04-24 00:42:29.955957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:28:36.196 [2024-04-24 00:42:29.956190] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:36.196 [2024-04-24 00:42:29.956716] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:36.196 [2024-04-24 00:42:29.956866] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:36.196 [2024-04-24 00:42:29.957113] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:28:36.196 [2024-04-24 00:42:29.957221] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:36.196 [2024-04-24 00:42:29.957395] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:28:36.196 [2024-04-24 00:42:29.957484] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:36.196 [2024-04-24 00:42:29.957675] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000062f0 00:28:36.196 [2024-04-24 00:42:29.964825] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:28:36.196 [2024-04-24 00:42:29.964942] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:28:36.196 [2024-04-24 00:42:29.965372] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:36.196 pt4 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.196 00:42:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.762 00:42:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:36.762 "name": "raid_bdev1", 00:28:36.762 "uuid": "85e53ea6-08ac-42c4-889c-48a98f4143d1", 00:28:36.762 "strip_size_kb": 64, 00:28:36.762 "state": "online", 00:28:36.762 "raid_level": "raid5f", 00:28:36.762 "superblock": true, 00:28:36.762 "num_base_bdevs": 4, 00:28:36.762 "num_base_bdevs_discovered": 3, 00:28:36.762 "num_base_bdevs_operational": 3, 00:28:36.762 "base_bdevs_list": [ 00:28:36.762 { 00:28:36.762 "name": null, 00:28:36.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.762 "is_configured": false, 00:28:36.762 "data_offset": 2048, 00:28:36.762 "data_size": 63488 00:28:36.762 }, 00:28:36.762 { 00:28:36.762 "name": "pt2", 00:28:36.762 "uuid": "f368a2f3-ac5b-5bd4-aefa-7947926ea6e0", 00:28:36.762 "is_configured": true, 00:28:36.762 "data_offset": 2048, 00:28:36.762 "data_size": 63488 00:28:36.762 }, 00:28:36.762 { 00:28:36.762 "name": "pt3", 00:28:36.762 "uuid": "95bee972-420c-587e-9916-bdd1e9ed29c7", 00:28:36.762 "is_configured": true, 00:28:36.762 "data_offset": 2048, 00:28:36.762 "data_size": 63488 00:28:36.762 }, 00:28:36.762 { 00:28:36.762 "name": "pt4", 00:28:36.762 "uuid": "9422085f-9a42-5b0e-946b-bb102e16346e", 00:28:36.762 "is_configured": true, 00:28:36.762 "data_offset": 2048, 00:28:36.762 "data_size": 63488 00:28:36.762 } 00:28:36.762 ] 00:28:36.762 }' 00:28:36.762 00:42:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:36.762 00:42:30 -- common/autotest_common.sh@10 -- # set +x 00:28:37.329 00:42:30 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:28:37.329 00:42:30 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:37.588 [2024-04-24 00:42:31.200378] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:37.588 [2024-04-24 00:42:31.200572] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:37.588 [2024-04-24 00:42:31.200719] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:37.588 [2024-04-24 00:42:31.200872] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:37.588 [2024-04-24 00:42:31.200966] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:28:37.588 00:42:31 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.588 00:42:31 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:37.846 [2024-04-24 00:42:31.612425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:37.846 [2024-04-24 00:42:31.612702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:37.846 [2024-04-24 00:42:31.612783] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:28:37.846 [2024-04-24 00:42:31.612901] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:37.846 [2024-04-24 00:42:31.615794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:37.846 [2024-04-24 00:42:31.616009] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:37.846 [2024-04-24 00:42:31.616279] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:28:37.846 [2024-04-24 00:42:31.616435] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:37.846 pt1 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:37.846 00:42:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:37.847 00:42:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.847 00:42:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.413 00:42:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:38.413 "name": "raid_bdev1", 00:28:38.413 "uuid": "85e53ea6-08ac-42c4-889c-48a98f4143d1", 00:28:38.413 "strip_size_kb": 64, 00:28:38.413 "state": "configuring", 00:28:38.413 "raid_level": "raid5f", 00:28:38.413 "superblock": true, 00:28:38.413 "num_base_bdevs": 4, 00:28:38.413 "num_base_bdevs_discovered": 1, 00:28:38.413 "num_base_bdevs_operational": 4, 00:28:38.413 "base_bdevs_list": [ 00:28:38.414 { 00:28:38.414 "name": "pt1", 00:28:38.414 "uuid": "82a4b5f7-2121-5298-898f-15eeaf2d1bb8", 00:28:38.414 "is_configured": true, 
00:28:38.414 "data_offset": 2048, 00:28:38.414 "data_size": 63488 00:28:38.414 }, 00:28:38.414 { 00:28:38.414 "name": null, 00:28:38.414 "uuid": "f368a2f3-ac5b-5bd4-aefa-7947926ea6e0", 00:28:38.414 "is_configured": false, 00:28:38.414 "data_offset": 2048, 00:28:38.414 "data_size": 63488 00:28:38.414 }, 00:28:38.414 { 00:28:38.414 "name": null, 00:28:38.414 "uuid": "95bee972-420c-587e-9916-bdd1e9ed29c7", 00:28:38.414 "is_configured": false, 00:28:38.414 "data_offset": 2048, 00:28:38.414 "data_size": 63488 00:28:38.414 }, 00:28:38.414 { 00:28:38.414 "name": null, 00:28:38.414 "uuid": "9422085f-9a42-5b0e-946b-bb102e16346e", 00:28:38.414 "is_configured": false, 00:28:38.414 "data_offset": 2048, 00:28:38.414 "data_size": 63488 00:28:38.414 } 00:28:38.414 ] 00:28:38.414 }' 00:28:38.414 00:42:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:38.414 00:42:31 -- common/autotest_common.sh@10 -- # set +x 00:28:38.979 00:42:32 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:28:38.979 00:42:32 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:38.979 00:42:32 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:38.979 00:42:32 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:28:38.979 00:42:32 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:38.979 00:42:32 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:28:39.237 00:42:32 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:28:39.237 00:42:32 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:39.238 00:42:32 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:28:39.497 00:42:33 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:28:39.497 00:42:33 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:28:39.497 00:42:33 -- bdev/bdev_raid.sh@489 -- # i=3 00:28:39.497 00:42:33 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:28:39.756 [2024-04-24 00:42:33.401193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:28:39.756 [2024-04-24 00:42:33.401484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:39.756 [2024-04-24 00:42:33.401567] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:28:39.756 [2024-04-24 00:42:33.401845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:39.756 [2024-04-24 00:42:33.402434] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:39.756 [2024-04-24 00:42:33.402615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:28:39.756 [2024-04-24 00:42:33.402910] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:28:39.756 [2024-04-24 00:42:33.403044] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:39.756 [2024-04-24 00:42:33.403141] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:39.756 [2024-04-24 00:42:33.403198] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state configuring 00:28:39.756 [2024-04-24 00:42:33.403483] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:28:39.756 pt4 00:28:39.756 00:42:33 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:28:39.756 00:42:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:39.756 00:42:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:28:39.756 00:42:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:39.756 00:42:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:39.756 00:42:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:39.756 00:42:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:39.756 00:42:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:39.756 00:42:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:39.756 00:42:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:39.756 00:42:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.756 00:42:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.015 00:42:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:40.015 "name": "raid_bdev1", 00:28:40.015 "uuid": "85e53ea6-08ac-42c4-889c-48a98f4143d1", 00:28:40.015 "strip_size_kb": 64, 00:28:40.015 "state": "configuring", 00:28:40.015 "raid_level": "raid5f", 00:28:40.015 "superblock": true, 00:28:40.015 "num_base_bdevs": 4, 00:28:40.015 "num_base_bdevs_discovered": 1, 00:28:40.015 "num_base_bdevs_operational": 3, 00:28:40.015 "base_bdevs_list": [ 00:28:40.015 { 00:28:40.015 "name": null, 00:28:40.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.015 "is_configured": false, 00:28:40.015 "data_offset": 2048, 00:28:40.015 "data_size": 63488 00:28:40.015 }, 00:28:40.015 { 00:28:40.015 "name": null, 00:28:40.015 "uuid": "f368a2f3-ac5b-5bd4-aefa-7947926ea6e0", 00:28:40.015 "is_configured": false, 00:28:40.015 "data_offset": 2048, 00:28:40.015 "data_size": 63488 00:28:40.015 }, 00:28:40.015 { 00:28:40.015 "name": null, 00:28:40.015 "uuid": "95bee972-420c-587e-9916-bdd1e9ed29c7", 00:28:40.015 "is_configured": false, 00:28:40.015 "data_offset": 2048, 00:28:40.015 "data_size": 63488 00:28:40.015 }, 00:28:40.015 { 00:28:40.015 "name": "pt4", 00:28:40.015 "uuid": "9422085f-9a42-5b0e-946b-bb102e16346e", 00:28:40.015 "is_configured": true, 00:28:40.015 "data_offset": 2048, 00:28:40.015 "data_size": 63488 00:28:40.015 } 00:28:40.015 ] 00:28:40.015 }' 00:28:40.015 00:42:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:40.015 00:42:33 -- common/autotest_common.sh@10 -- # set +x 00:28:40.583 00:42:34 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:28:40.583 00:42:34 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:28:40.583 00:42:34 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:40.842 [2024-04-24 00:42:34.445506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:40.842 [2024-04-24 00:42:34.445847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:40.842 [2024-04-24 00:42:34.446052] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:28:40.842 [2024-04-24 00:42:34.446194] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:40.842 [2024-04-24 00:42:34.446783] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:40.842 [2024-04-24 00:42:34.447001] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:40.842 [2024-04-24 00:42:34.447286] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:28:40.842 [2024-04-24 00:42:34.447442] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:40.842 pt2 00:28:40.842 00:42:34 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:28:40.842 00:42:34 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:28:40.842 00:42:34 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:41.100 [2024-04-24 00:42:34.721571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:41.100 [2024-04-24 00:42:34.721843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:41.100 [2024-04-24 00:42:34.721931] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:28:41.100 [2024-04-24 00:42:34.722067] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:41.100 [2024-04-24 00:42:34.722655] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:41.100 [2024-04-24 00:42:34.722860] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:41.100 [2024-04-24 00:42:34.723153] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:28:41.100 [2024-04-24 00:42:34.723322] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:41.100 [2024-04-24 00:42:34.723537] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:28:41.100 [2024-04-24 00:42:34.723645] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:41.100 [2024-04-24 00:42:34.723786] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:28:41.100 [2024-04-24 00:42:34.733543] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:28:41.100 [2024-04-24 00:42:34.733719] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011f80 00:28:41.100 [2024-04-24 00:42:34.734148] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:41.100 pt3 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.100 00:42:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.358 00:42:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:41.358 "name": "raid_bdev1", 00:28:41.358 "uuid": "85e53ea6-08ac-42c4-889c-48a98f4143d1", 00:28:41.358 "strip_size_kb": 64, 00:28:41.358 "state": "online", 00:28:41.358 "raid_level": "raid5f", 00:28:41.358 "superblock": true, 00:28:41.358 "num_base_bdevs": 4, 00:28:41.358 "num_base_bdevs_discovered": 3, 00:28:41.358 "num_base_bdevs_operational": 3, 00:28:41.358 "base_bdevs_list": [ 00:28:41.358 { 00:28:41.358 "name": null, 00:28:41.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.358 "is_configured": false, 00:28:41.358 "data_offset": 2048, 00:28:41.358 "data_size": 63488 00:28:41.358 }, 00:28:41.358 { 00:28:41.358 "name": "pt2", 00:28:41.358 "uuid": "f368a2f3-ac5b-5bd4-aefa-7947926ea6e0", 00:28:41.358 "is_configured": true, 00:28:41.358 "data_offset": 2048, 00:28:41.358 "data_size": 63488 00:28:41.358 }, 00:28:41.358 { 00:28:41.358 "name": "pt3", 00:28:41.358 "uuid": "95bee972-420c-587e-9916-bdd1e9ed29c7", 00:28:41.358 "is_configured": true, 00:28:41.358 "data_offset": 2048, 00:28:41.358 "data_size": 63488 00:28:41.358 }, 00:28:41.358 { 00:28:41.358 "name": "pt4", 00:28:41.358 "uuid": "9422085f-9a42-5b0e-946b-bb102e16346e", 00:28:41.358 "is_configured": true, 00:28:41.358 "data_offset": 2048, 00:28:41.358 "data_size": 63488 00:28:41.358 } 00:28:41.358 ] 00:28:41.358 }' 00:28:41.358 00:42:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:41.358 00:42:34 -- common/autotest_common.sh@10 -- # set +x 00:28:41.925 00:42:35 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:41.925 00:42:35 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:28:42.183 [2024-04-24 00:42:35.882745] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:42.183 00:42:35 -- bdev/bdev_raid.sh@506 -- # '[' 85e53ea6-08ac-42c4-889c-48a98f4143d1 '!=' 85e53ea6-08ac-42c4-889c-48a98f4143d1 ']' 00:28:42.183 00:42:35 -- bdev/bdev_raid.sh@511 -- # killprocess 139593 00:28:42.183 00:42:35 -- common/autotest_common.sh@936 -- # '[' -z 139593 ']' 00:28:42.183 00:42:35 -- common/autotest_common.sh@940 -- # kill -0 139593 00:28:42.183 00:42:35 -- common/autotest_common.sh@941 -- # uname 00:28:42.183 00:42:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:42.183 00:42:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139593 00:28:42.183 00:42:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:42.183 00:42:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:42.183 00:42:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139593' 00:28:42.183 killing process with pid 139593 00:28:42.183 00:42:35 -- common/autotest_common.sh@955 -- # kill 139593 00:28:42.183 00:42:35 -- common/autotest_common.sh@960 -- # wait 139593 00:28:42.183 [2024-04-24 00:42:35.931705] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:42.183 [2024-04-24 00:42:35.931805] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:42.183 [2024-04-24 00:42:35.931891] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:42.183 [2024-04-24 00:42:35.932065] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state offline 00:28:42.749 [2024-04-24 00:42:36.365818] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:44.124 ************************************ 00:28:44.124 END TEST raid5f_superblock_test 00:28:44.124 ************************************ 00:28:44.124 00:42:37 -- bdev/bdev_raid.sh@513 -- # return 0 00:28:44.124 00:28:44.124 real 0m25.401s 00:28:44.124 user 0m45.679s 00:28:44.124 sys 0m3.353s 00:28:44.124 00:42:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:44.124 00:42:37 -- common/autotest_common.sh@10 -- # set +x 00:28:44.124 00:42:37 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:28:44.124 00:42:37 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:28:44.124 00:42:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:28:44.124 00:42:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:44.124 00:42:37 -- common/autotest_common.sh@10 -- # set +x 00:28:44.382 ************************************ 00:28:44.382 START TEST raid5f_rebuild_test 00:28:44.382 ************************************ 00:28:44.382 00:42:37 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 4 false false 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@544 -- # raid_pid=140304 
00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:44.382 00:42:37 -- bdev/bdev_raid.sh@545 -- # waitforlisten 140304 /var/tmp/spdk-raid.sock 00:28:44.382 00:42:37 -- common/autotest_common.sh@817 -- # '[' -z 140304 ']' 00:28:44.382 00:42:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:44.382 00:42:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:44.382 00:42:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:44.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:44.382 00:42:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:44.382 00:42:37 -- common/autotest_common.sh@10 -- # set +x 00:28:44.382 [2024-04-24 00:42:38.015763] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:28:44.382 [2024-04-24 00:42:38.016165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140304 ] 00:28:44.382 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:44.382 Zero copy mechanism will not be used. 00:28:44.641 [2024-04-24 00:42:38.223174] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.910 [2024-04-24 00:42:38.519485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.169 [2024-04-24 00:42:38.779853] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:45.169 00:42:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:45.169 00:42:38 -- common/autotest_common.sh@850 -- # return 0 00:28:45.169 00:42:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:45.169 00:42:38 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:28:45.169 00:42:38 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:28:45.428 BaseBdev1 00:28:45.687 00:42:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:45.687 00:42:39 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:28:45.687 00:42:39 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:28:45.945 BaseBdev2 00:28:45.945 00:42:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:45.945 00:42:39 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:28:45.945 00:42:39 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:28:46.203 BaseBdev3 00:28:46.203 00:42:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:28:46.203 00:42:39 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:28:46.203 00:42:39 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:28:46.461 BaseBdev4 00:28:46.461 00:42:40 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:47.027 spare_malloc 00:28:47.027 00:42:40 -- 
bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:47.285 spare_delay 00:28:47.285 00:42:40 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:47.285 [2024-04-24 00:42:41.075539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:47.285 [2024-04-24 00:42:41.075881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.285 [2024-04-24 00:42:41.076040] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:28:47.285 [2024-04-24 00:42:41.076188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.544 [2024-04-24 00:42:41.078896] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.544 [2024-04-24 00:42:41.079004] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:47.544 spare 00:28:47.544 00:42:41 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:47.544 [2024-04-24 00:42:41.303822] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:47.544 [2024-04-24 00:42:41.306295] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:47.544 [2024-04-24 00:42:41.306535] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:47.544 [2024-04-24 00:42:41.306611] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:47.544 [2024-04-24 00:42:41.306782] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:28:47.544 [2024-04-24 00:42:41.306885] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:28:47.544 [2024-04-24 00:42:41.307131] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:28:47.544 [2024-04-24 00:42:41.315945] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:28:47.544 [2024-04-24 00:42:41.316141] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:28:47.544 [2024-04-24 00:42:41.316494] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:47.544 00:42:41 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:28:47.544 00:42:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:47.544 00:42:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:47.544 00:42:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:47.544 00:42:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:47.544 00:42:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:28:47.544 00:42:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:47.544 00:42:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:47.544 00:42:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:47.544 00:42:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:47.803 00:42:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:28:47.803 00:42:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.803 00:42:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:47.803 "name": "raid_bdev1", 00:28:47.803 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:28:47.803 "strip_size_kb": 64, 00:28:47.803 "state": "online", 00:28:47.803 "raid_level": "raid5f", 00:28:47.803 "superblock": false, 00:28:47.803 "num_base_bdevs": 4, 00:28:47.803 "num_base_bdevs_discovered": 4, 00:28:47.803 "num_base_bdevs_operational": 4, 00:28:47.803 "base_bdevs_list": [ 00:28:47.803 { 00:28:47.803 "name": "BaseBdev1", 00:28:47.803 "uuid": "d4a34664-c0af-4199-980f-65d778ebfff1", 00:28:47.803 "is_configured": true, 00:28:47.803 "data_offset": 0, 00:28:47.803 "data_size": 65536 00:28:47.803 }, 00:28:47.803 { 00:28:47.803 "name": "BaseBdev2", 00:28:47.803 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:28:47.803 "is_configured": true, 00:28:47.803 "data_offset": 0, 00:28:47.803 "data_size": 65536 00:28:47.803 }, 00:28:47.803 { 00:28:47.803 "name": "BaseBdev3", 00:28:47.803 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:28:47.803 "is_configured": true, 00:28:47.803 "data_offset": 0, 00:28:47.803 "data_size": 65536 00:28:47.803 }, 00:28:47.803 { 00:28:47.803 "name": "BaseBdev4", 00:28:47.803 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:28:47.803 "is_configured": true, 00:28:47.803 "data_offset": 0, 00:28:47.803 "data_size": 65536 00:28:47.803 } 00:28:47.803 ] 00:28:47.803 }' 00:28:47.803 00:42:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:47.803 00:42:41 -- common/autotest_common.sh@10 -- # set +x 00:28:48.370 00:42:42 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:48.370 00:42:42 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:28:48.628 [2024-04-24 00:42:42.278553] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:48.628 00:42:42 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:28:48.628 00:42:42 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.628 00:42:42 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:48.886 00:42:42 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:28:48.886 00:42:42 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:28:48.886 00:42:42 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:28:48.886 00:42:42 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:48.886 00:42:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:48.886 00:42:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:48.886 00:42:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:48.886 00:42:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:48.886 00:42:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:48.886 00:42:42 -- bdev/nbd_common.sh@12 -- # local i 00:28:48.886 00:42:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:48.886 00:42:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:48.886 00:42:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:49.144 [2024-04-24 00:42:42.710465] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:28:49.144 /dev/nbd0 00:28:49.144 00:42:42 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:28:49.144 00:42:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:49.144 00:42:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:28:49.144 00:42:42 -- common/autotest_common.sh@855 -- # local i 00:28:49.144 00:42:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:28:49.144 00:42:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:28:49.144 00:42:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:28:49.144 00:42:42 -- common/autotest_common.sh@859 -- # break 00:28:49.144 00:42:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:28:49.144 00:42:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:28:49.144 00:42:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:49.144 1+0 records in 00:28:49.144 1+0 records out 00:28:49.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499688 s, 8.2 MB/s 00:28:49.144 00:42:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:49.144 00:42:42 -- common/autotest_common.sh@872 -- # size=4096 00:28:49.144 00:42:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:49.144 00:42:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:28:49.144 00:42:42 -- common/autotest_common.sh@875 -- # return 0 00:28:49.144 00:42:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:49.144 00:42:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:49.144 00:42:42 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:28:49.144 00:42:42 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:28:49.144 00:42:42 -- bdev/bdev_raid.sh@582 -- # echo 192 00:28:49.144 00:42:42 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:28:49.711 512+0 records in 00:28:49.711 512+0 records out 00:28:49.711 100663296 bytes (101 MB, 96 MiB) copied, 0.65734 s, 153 MB/s 00:28:49.711 00:42:43 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:49.711 00:42:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:49.711 00:42:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:49.711 00:42:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:49.711 00:42:43 -- bdev/nbd_common.sh@51 -- # local i 00:28:49.711 00:42:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:49.711 00:42:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:49.970 00:42:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:49.970 [2024-04-24 00:42:43.673502] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:49.970 00:42:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:49.970 00:42:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:49.970 00:42:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:49.970 00:42:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:49.970 00:42:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:49.970 00:42:43 -- bdev/nbd_common.sh@41 -- # break 00:28:49.970 00:42:43 -- bdev/nbd_common.sh@45 -- # return 0 00:28:49.970 00:42:43 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:50.228 [2024-04-24 00:42:43.950126] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:28:50.228 00:42:43 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:50.228 00:42:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:50.228 00:42:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:50.228 00:42:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:50.228 00:42:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:50.228 00:42:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:50.228 00:42:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:50.228 00:42:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:50.228 00:42:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:28:50.228 00:42:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:50.228 00:42:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.228 00:42:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.486 00:42:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:50.486 "name": "raid_bdev1", 00:28:50.486 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:28:50.486 "strip_size_kb": 64, 00:28:50.486 "state": "online", 00:28:50.486 "raid_level": "raid5f", 00:28:50.486 "superblock": false, 00:28:50.486 "num_base_bdevs": 4, 00:28:50.486 "num_base_bdevs_discovered": 3, 00:28:50.486 "num_base_bdevs_operational": 3, 00:28:50.486 "base_bdevs_list": [ 00:28:50.486 { 00:28:50.486 "name": null, 00:28:50.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.486 "is_configured": false, 00:28:50.486 "data_offset": 0, 00:28:50.486 "data_size": 65536 00:28:50.486 }, 00:28:50.486 { 00:28:50.486 "name": "BaseBdev2", 00:28:50.486 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:28:50.486 "is_configured": true, 00:28:50.486 "data_offset": 0, 00:28:50.486 "data_size": 65536 00:28:50.486 }, 00:28:50.486 { 00:28:50.486 "name": "BaseBdev3", 00:28:50.486 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:28:50.486 "is_configured": true, 00:28:50.486 "data_offset": 0, 00:28:50.486 "data_size": 65536 00:28:50.486 }, 00:28:50.486 { 00:28:50.486 "name": "BaseBdev4", 00:28:50.486 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:28:50.486 "is_configured": true, 00:28:50.486 "data_offset": 0, 00:28:50.486 "data_size": 65536 00:28:50.486 } 00:28:50.486 ] 00:28:50.486 }' 00:28:50.486 00:42:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:50.486 00:42:44 -- common/autotest_common.sh@10 -- # set +x 00:28:51.418 00:42:44 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:51.418 [2024-04-24 00:42:45.159397] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:28:51.418 [2024-04-24 00:42:45.159719] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:51.418 [2024-04-24 00:42:45.179695] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:28:51.418 [2024-04-24 00:42:45.192099] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:51.418 00:42:45 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:52.793 "name": "raid_bdev1", 00:28:52.793 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:28:52.793 "strip_size_kb": 64, 00:28:52.793 "state": "online", 00:28:52.793 "raid_level": "raid5f", 00:28:52.793 "superblock": false, 00:28:52.793 "num_base_bdevs": 4, 00:28:52.793 "num_base_bdevs_discovered": 4, 00:28:52.793 "num_base_bdevs_operational": 4, 00:28:52.793 "process": { 00:28:52.793 "type": "rebuild", 00:28:52.793 "target": "spare", 00:28:52.793 "progress": { 00:28:52.793 "blocks": 23040, 00:28:52.793 "percent": 11 00:28:52.793 } 00:28:52.793 }, 00:28:52.793 "base_bdevs_list": [ 00:28:52.793 { 00:28:52.793 "name": "spare", 00:28:52.793 "uuid": "0d3c9867-aad0-54fd-b8e8-2c7d50bd8659", 00:28:52.793 "is_configured": true, 00:28:52.793 "data_offset": 0, 00:28:52.793 "data_size": 65536 00:28:52.793 }, 00:28:52.793 { 00:28:52.793 "name": "BaseBdev2", 00:28:52.793 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:28:52.793 "is_configured": true, 00:28:52.793 "data_offset": 0, 00:28:52.793 "data_size": 65536 00:28:52.793 }, 00:28:52.793 { 00:28:52.793 "name": "BaseBdev3", 00:28:52.793 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:28:52.793 "is_configured": true, 00:28:52.793 "data_offset": 0, 00:28:52.793 "data_size": 65536 00:28:52.793 }, 00:28:52.793 { 00:28:52.793 "name": "BaseBdev4", 00:28:52.793 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:28:52.793 "is_configured": true, 00:28:52.793 "data_offset": 0, 00:28:52.793 "data_size": 65536 00:28:52.793 } 00:28:52.793 ] 00:28:52.793 }' 00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:52.793 00:42:46 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:53.050 [2024-04-24 00:42:46.801829] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:53.050 [2024-04-24 00:42:46.806577] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:53.050 [2024-04-24 00:42:46.807013] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:53.308 00:42:46 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:28:53.308 00:42:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:28:53.308 00:42:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:28:53.308 00:42:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:28:53.308 00:42:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:28:53.308 00:42:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:28:53.308 00:42:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:28:53.308 00:42:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:28:53.308 00:42:46 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:28:53.308 00:42:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:28:53.308 00:42:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.308 00:42:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.566 00:42:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:28:53.566 "name": "raid_bdev1", 00:28:53.566 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:28:53.566 "strip_size_kb": 64, 00:28:53.566 "state": "online", 00:28:53.566 "raid_level": "raid5f", 00:28:53.566 "superblock": false, 00:28:53.566 "num_base_bdevs": 4, 00:28:53.566 "num_base_bdevs_discovered": 3, 00:28:53.566 "num_base_bdevs_operational": 3, 00:28:53.566 "base_bdevs_list": [ 00:28:53.566 { 00:28:53.566 "name": null, 00:28:53.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.566 "is_configured": false, 00:28:53.566 "data_offset": 0, 00:28:53.566 "data_size": 65536 00:28:53.566 }, 00:28:53.566 { 00:28:53.566 "name": "BaseBdev2", 00:28:53.566 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:28:53.566 "is_configured": true, 00:28:53.566 "data_offset": 0, 00:28:53.566 "data_size": 65536 00:28:53.566 }, 00:28:53.566 { 00:28:53.566 "name": "BaseBdev3", 00:28:53.566 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:28:53.566 "is_configured": true, 00:28:53.566 "data_offset": 0, 00:28:53.566 "data_size": 65536 00:28:53.566 }, 00:28:53.566 { 00:28:53.566 "name": "BaseBdev4", 00:28:53.566 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:28:53.566 "is_configured": true, 00:28:53.566 "data_offset": 0, 00:28:53.566 "data_size": 65536 00:28:53.566 } 00:28:53.566 ] 00:28:53.566 }' 00:28:53.566 00:42:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:28:53.566 00:42:47 -- common/autotest_common.sh@10 -- # set +x 00:28:54.133 00:42:47 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:54.133 00:42:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:54.133 00:42:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:28:54.133 00:42:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:28:54.133 00:42:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:54.133 00:42:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.133 00:42:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.393 00:42:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:54.393 "name": "raid_bdev1", 00:28:54.393 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:28:54.393 "strip_size_kb": 64, 00:28:54.393 "state": "online", 00:28:54.393 "raid_level": "raid5f", 00:28:54.393 "superblock": false, 00:28:54.393 "num_base_bdevs": 4, 00:28:54.393 "num_base_bdevs_discovered": 3, 00:28:54.393 "num_base_bdevs_operational": 3, 00:28:54.393 "base_bdevs_list": [ 00:28:54.393 { 00:28:54.393 "name": null, 00:28:54.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.393 "is_configured": false, 00:28:54.393 "data_offset": 0, 00:28:54.393 "data_size": 65536 00:28:54.393 }, 00:28:54.393 { 00:28:54.393 "name": "BaseBdev2", 00:28:54.393 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:28:54.393 "is_configured": true, 00:28:54.393 "data_offset": 0, 00:28:54.393 "data_size": 65536 00:28:54.393 }, 00:28:54.393 { 00:28:54.393 "name": "BaseBdev3", 00:28:54.393 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:28:54.393 "is_configured": true, 
00:28:54.393 "data_offset": 0, 00:28:54.393 "data_size": 65536 00:28:54.393 }, 00:28:54.393 { 00:28:54.393 "name": "BaseBdev4", 00:28:54.393 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:28:54.393 "is_configured": true, 00:28:54.393 "data_offset": 0, 00:28:54.393 "data_size": 65536 00:28:54.393 } 00:28:54.393 ] 00:28:54.393 }' 00:28:54.393 00:42:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:54.393 00:42:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:54.393 00:42:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:54.393 00:42:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:28:54.393 00:42:48 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:54.651 [2024-04-24 00:42:48.221083] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:28:54.651 [2024-04-24 00:42:48.221351] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:54.651 [2024-04-24 00:42:48.238496] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:28:54.651 [2024-04-24 00:42:48.250442] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:54.651 00:42:48 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:28:55.646 00:42:49 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:55.646 00:42:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:55.646 00:42:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:55.646 00:42:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:55.646 00:42:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:55.646 00:42:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.646 00:42:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:55.905 "name": "raid_bdev1", 00:28:55.905 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:28:55.905 "strip_size_kb": 64, 00:28:55.905 "state": "online", 00:28:55.905 "raid_level": "raid5f", 00:28:55.905 "superblock": false, 00:28:55.905 "num_base_bdevs": 4, 00:28:55.905 "num_base_bdevs_discovered": 4, 00:28:55.905 "num_base_bdevs_operational": 4, 00:28:55.905 "process": { 00:28:55.905 "type": "rebuild", 00:28:55.905 "target": "spare", 00:28:55.905 "progress": { 00:28:55.905 "blocks": 23040, 00:28:55.905 "percent": 11 00:28:55.905 } 00:28:55.905 }, 00:28:55.905 "base_bdevs_list": [ 00:28:55.905 { 00:28:55.905 "name": "spare", 00:28:55.905 "uuid": "0d3c9867-aad0-54fd-b8e8-2c7d50bd8659", 00:28:55.905 "is_configured": true, 00:28:55.905 "data_offset": 0, 00:28:55.905 "data_size": 65536 00:28:55.905 }, 00:28:55.905 { 00:28:55.905 "name": "BaseBdev2", 00:28:55.905 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:28:55.905 "is_configured": true, 00:28:55.905 "data_offset": 0, 00:28:55.905 "data_size": 65536 00:28:55.905 }, 00:28:55.905 { 00:28:55.905 "name": "BaseBdev3", 00:28:55.905 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:28:55.905 "is_configured": true, 00:28:55.905 "data_offset": 0, 00:28:55.905 "data_size": 65536 00:28:55.905 }, 00:28:55.905 { 00:28:55.905 "name": "BaseBdev4", 00:28:55.905 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:28:55.905 "is_configured": true, 00:28:55.905 "data_offset": 0, 
00:28:55.905 "data_size": 65536 00:28:55.905 } 00:28:55.905 ] 00:28:55.905 }' 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@657 -- # local timeout=792 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.905 00:42:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.164 00:42:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:56.164 "name": "raid_bdev1", 00:28:56.164 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:28:56.164 "strip_size_kb": 64, 00:28:56.164 "state": "online", 00:28:56.164 "raid_level": "raid5f", 00:28:56.164 "superblock": false, 00:28:56.164 "num_base_bdevs": 4, 00:28:56.164 "num_base_bdevs_discovered": 4, 00:28:56.164 "num_base_bdevs_operational": 4, 00:28:56.164 "process": { 00:28:56.164 "type": "rebuild", 00:28:56.164 "target": "spare", 00:28:56.164 "progress": { 00:28:56.164 "blocks": 28800, 00:28:56.164 "percent": 14 00:28:56.164 } 00:28:56.164 }, 00:28:56.164 "base_bdevs_list": [ 00:28:56.164 { 00:28:56.164 "name": "spare", 00:28:56.164 "uuid": "0d3c9867-aad0-54fd-b8e8-2c7d50bd8659", 00:28:56.164 "is_configured": true, 00:28:56.164 "data_offset": 0, 00:28:56.164 "data_size": 65536 00:28:56.164 }, 00:28:56.164 { 00:28:56.164 "name": "BaseBdev2", 00:28:56.164 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:28:56.164 "is_configured": true, 00:28:56.164 "data_offset": 0, 00:28:56.164 "data_size": 65536 00:28:56.164 }, 00:28:56.164 { 00:28:56.164 "name": "BaseBdev3", 00:28:56.164 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:28:56.164 "is_configured": true, 00:28:56.164 "data_offset": 0, 00:28:56.164 "data_size": 65536 00:28:56.164 }, 00:28:56.164 { 00:28:56.164 "name": "BaseBdev4", 00:28:56.164 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:28:56.164 "is_configured": true, 00:28:56.164 "data_offset": 0, 00:28:56.164 "data_size": 65536 00:28:56.164 } 00:28:56.164 ] 00:28:56.164 }' 00:28:56.164 00:42:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:56.164 00:42:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:56.164 00:42:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:56.164 00:42:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:56.164 00:42:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:57.539 00:42:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:57.539 00:42:50 -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:57.539 00:42:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:57.539 00:42:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:57.539 00:42:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:57.539 00:42:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:57.539 00:42:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.539 00:42:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.539 00:42:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:57.539 "name": "raid_bdev1", 00:28:57.539 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:28:57.539 "strip_size_kb": 64, 00:28:57.539 "state": "online", 00:28:57.539 "raid_level": "raid5f", 00:28:57.539 "superblock": false, 00:28:57.539 "num_base_bdevs": 4, 00:28:57.539 "num_base_bdevs_discovered": 4, 00:28:57.539 "num_base_bdevs_operational": 4, 00:28:57.539 "process": { 00:28:57.539 "type": "rebuild", 00:28:57.539 "target": "spare", 00:28:57.539 "progress": { 00:28:57.539 "blocks": 55680, 00:28:57.539 "percent": 28 00:28:57.539 } 00:28:57.539 }, 00:28:57.539 "base_bdevs_list": [ 00:28:57.539 { 00:28:57.539 "name": "spare", 00:28:57.539 "uuid": "0d3c9867-aad0-54fd-b8e8-2c7d50bd8659", 00:28:57.539 "is_configured": true, 00:28:57.539 "data_offset": 0, 00:28:57.539 "data_size": 65536 00:28:57.539 }, 00:28:57.539 { 00:28:57.539 "name": "BaseBdev2", 00:28:57.539 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:28:57.539 "is_configured": true, 00:28:57.539 "data_offset": 0, 00:28:57.539 "data_size": 65536 00:28:57.539 }, 00:28:57.539 { 00:28:57.539 "name": "BaseBdev3", 00:28:57.539 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:28:57.539 "is_configured": true, 00:28:57.539 "data_offset": 0, 00:28:57.539 "data_size": 65536 00:28:57.539 }, 00:28:57.539 { 00:28:57.539 "name": "BaseBdev4", 00:28:57.539 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:28:57.539 "is_configured": true, 00:28:57.539 "data_offset": 0, 00:28:57.539 "data_size": 65536 00:28:57.539 } 00:28:57.539 ] 00:28:57.539 }' 00:28:57.539 00:42:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:57.539 00:42:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:57.539 00:42:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:57.798 00:42:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:57.798 00:42:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:58.734 00:42:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:28:58.734 00:42:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:58.734 00:42:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:28:58.734 00:42:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:28:58.734 00:42:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:28:58.734 00:42:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:28:58.734 00:42:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:58.734 00:42:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.002 00:42:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:28:59.002 "name": "raid_bdev1", 00:28:59.002 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:28:59.002 "strip_size_kb": 64, 00:28:59.002 "state": "online", 
00:28:59.002 "raid_level": "raid5f", 00:28:59.003 "superblock": false, 00:28:59.003 "num_base_bdevs": 4, 00:28:59.003 "num_base_bdevs_discovered": 4, 00:28:59.003 "num_base_bdevs_operational": 4, 00:28:59.003 "process": { 00:28:59.003 "type": "rebuild", 00:28:59.003 "target": "spare", 00:28:59.003 "progress": { 00:28:59.003 "blocks": 82560, 00:28:59.003 "percent": 41 00:28:59.003 } 00:28:59.003 }, 00:28:59.003 "base_bdevs_list": [ 00:28:59.003 { 00:28:59.003 "name": "spare", 00:28:59.003 "uuid": "0d3c9867-aad0-54fd-b8e8-2c7d50bd8659", 00:28:59.003 "is_configured": true, 00:28:59.003 "data_offset": 0, 00:28:59.003 "data_size": 65536 00:28:59.003 }, 00:28:59.003 { 00:28:59.003 "name": "BaseBdev2", 00:28:59.003 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:28:59.003 "is_configured": true, 00:28:59.003 "data_offset": 0, 00:28:59.003 "data_size": 65536 00:28:59.003 }, 00:28:59.003 { 00:28:59.003 "name": "BaseBdev3", 00:28:59.003 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:28:59.003 "is_configured": true, 00:28:59.003 "data_offset": 0, 00:28:59.003 "data_size": 65536 00:28:59.003 }, 00:28:59.003 { 00:28:59.003 "name": "BaseBdev4", 00:28:59.003 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:28:59.003 "is_configured": true, 00:28:59.003 "data_offset": 0, 00:28:59.003 "data_size": 65536 00:28:59.003 } 00:28:59.003 ] 00:28:59.003 }' 00:28:59.003 00:42:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:28:59.003 00:42:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:59.003 00:42:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:28:59.262 00:42:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:28:59.262 00:42:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:00.196 00:42:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:00.196 00:42:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:00.196 00:42:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:00.196 00:42:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:00.196 00:42:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:00.196 00:42:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:00.196 00:42:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.196 00:42:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.455 00:42:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:00.455 "name": "raid_bdev1", 00:29:00.455 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:29:00.455 "strip_size_kb": 64, 00:29:00.455 "state": "online", 00:29:00.455 "raid_level": "raid5f", 00:29:00.455 "superblock": false, 00:29:00.455 "num_base_bdevs": 4, 00:29:00.455 "num_base_bdevs_discovered": 4, 00:29:00.455 "num_base_bdevs_operational": 4, 00:29:00.455 "process": { 00:29:00.455 "type": "rebuild", 00:29:00.455 "target": "spare", 00:29:00.455 "progress": { 00:29:00.455 "blocks": 107520, 00:29:00.455 "percent": 54 00:29:00.455 } 00:29:00.455 }, 00:29:00.455 "base_bdevs_list": [ 00:29:00.455 { 00:29:00.455 "name": "spare", 00:29:00.455 "uuid": "0d3c9867-aad0-54fd-b8e8-2c7d50bd8659", 00:29:00.455 "is_configured": true, 00:29:00.455 "data_offset": 0, 00:29:00.455 "data_size": 65536 00:29:00.455 }, 00:29:00.455 { 00:29:00.455 "name": "BaseBdev2", 00:29:00.455 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:29:00.455 "is_configured": true, 00:29:00.455 "data_offset": 0, 
00:29:00.455 "data_size": 65536 00:29:00.455 }, 00:29:00.455 { 00:29:00.455 "name": "BaseBdev3", 00:29:00.455 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:29:00.455 "is_configured": true, 00:29:00.455 "data_offset": 0, 00:29:00.455 "data_size": 65536 00:29:00.455 }, 00:29:00.455 { 00:29:00.455 "name": "BaseBdev4", 00:29:00.455 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:29:00.455 "is_configured": true, 00:29:00.455 "data_offset": 0, 00:29:00.455 "data_size": 65536 00:29:00.455 } 00:29:00.455 ] 00:29:00.455 }' 00:29:00.455 00:42:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:00.455 00:42:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:00.455 00:42:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:00.455 00:42:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:00.455 00:42:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:01.391 00:42:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:01.391 00:42:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:01.391 00:42:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:01.391 00:42:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:01.391 00:42:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:01.391 00:42:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:01.391 00:42:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.391 00:42:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.649 00:42:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:01.649 "name": "raid_bdev1", 00:29:01.649 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:29:01.649 "strip_size_kb": 64, 00:29:01.649 "state": "online", 00:29:01.649 "raid_level": "raid5f", 00:29:01.649 "superblock": false, 00:29:01.649 "num_base_bdevs": 4, 00:29:01.649 "num_base_bdevs_discovered": 4, 00:29:01.649 "num_base_bdevs_operational": 4, 00:29:01.649 "process": { 00:29:01.649 "type": "rebuild", 00:29:01.649 "target": "spare", 00:29:01.649 "progress": { 00:29:01.649 "blocks": 134400, 00:29:01.649 "percent": 68 00:29:01.649 } 00:29:01.649 }, 00:29:01.649 "base_bdevs_list": [ 00:29:01.649 { 00:29:01.649 "name": "spare", 00:29:01.649 "uuid": "0d3c9867-aad0-54fd-b8e8-2c7d50bd8659", 00:29:01.649 "is_configured": true, 00:29:01.649 "data_offset": 0, 00:29:01.649 "data_size": 65536 00:29:01.649 }, 00:29:01.649 { 00:29:01.649 "name": "BaseBdev2", 00:29:01.649 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:29:01.649 "is_configured": true, 00:29:01.649 "data_offset": 0, 00:29:01.649 "data_size": 65536 00:29:01.649 }, 00:29:01.649 { 00:29:01.649 "name": "BaseBdev3", 00:29:01.649 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:29:01.649 "is_configured": true, 00:29:01.649 "data_offset": 0, 00:29:01.649 "data_size": 65536 00:29:01.649 }, 00:29:01.649 { 00:29:01.649 "name": "BaseBdev4", 00:29:01.649 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:29:01.649 "is_configured": true, 00:29:01.649 "data_offset": 0, 00:29:01.649 "data_size": 65536 00:29:01.649 } 00:29:01.649 ] 00:29:01.649 }' 00:29:01.649 00:42:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:01.907 00:42:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:01.907 00:42:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:01.907 00:42:55 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:29:01.907 00:42:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:02.901 00:42:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:02.901 00:42:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:02.901 00:42:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:02.901 00:42:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:02.901 00:42:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:02.901 00:42:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:02.901 00:42:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.901 00:42:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.159 00:42:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:03.159 "name": "raid_bdev1", 00:29:03.159 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:29:03.159 "strip_size_kb": 64, 00:29:03.159 "state": "online", 00:29:03.159 "raid_level": "raid5f", 00:29:03.159 "superblock": false, 00:29:03.159 "num_base_bdevs": 4, 00:29:03.159 "num_base_bdevs_discovered": 4, 00:29:03.159 "num_base_bdevs_operational": 4, 00:29:03.159 "process": { 00:29:03.159 "type": "rebuild", 00:29:03.159 "target": "spare", 00:29:03.159 "progress": { 00:29:03.159 "blocks": 161280, 00:29:03.159 "percent": 82 00:29:03.159 } 00:29:03.159 }, 00:29:03.159 "base_bdevs_list": [ 00:29:03.159 { 00:29:03.159 "name": "spare", 00:29:03.159 "uuid": "0d3c9867-aad0-54fd-b8e8-2c7d50bd8659", 00:29:03.159 "is_configured": true, 00:29:03.159 "data_offset": 0, 00:29:03.159 "data_size": 65536 00:29:03.159 }, 00:29:03.159 { 00:29:03.159 "name": "BaseBdev2", 00:29:03.159 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:29:03.159 "is_configured": true, 00:29:03.159 "data_offset": 0, 00:29:03.159 "data_size": 65536 00:29:03.159 }, 00:29:03.159 { 00:29:03.159 "name": "BaseBdev3", 00:29:03.159 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:29:03.159 "is_configured": true, 00:29:03.159 "data_offset": 0, 00:29:03.159 "data_size": 65536 00:29:03.159 }, 00:29:03.159 { 00:29:03.159 "name": "BaseBdev4", 00:29:03.159 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:29:03.159 "is_configured": true, 00:29:03.159 "data_offset": 0, 00:29:03.159 "data_size": 65536 00:29:03.159 } 00:29:03.159 ] 00:29:03.159 }' 00:29:03.159 00:42:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:03.159 00:42:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:03.159 00:42:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:03.159 00:42:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:03.159 00:42:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:04.554 00:42:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:04.554 00:42:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:04.554 00:42:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:04.554 00:42:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:04.554 00:42:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:04.554 00:42:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:04.554 00:42:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.554 00:42:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.554 00:42:58 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:04.554 "name": "raid_bdev1", 00:29:04.554 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:29:04.554 "strip_size_kb": 64, 00:29:04.554 "state": "online", 00:29:04.554 "raid_level": "raid5f", 00:29:04.554 "superblock": false, 00:29:04.554 "num_base_bdevs": 4, 00:29:04.554 "num_base_bdevs_discovered": 4, 00:29:04.554 "num_base_bdevs_operational": 4, 00:29:04.554 "process": { 00:29:04.554 "type": "rebuild", 00:29:04.554 "target": "spare", 00:29:04.554 "progress": { 00:29:04.554 "blocks": 188160, 00:29:04.554 "percent": 95 00:29:04.554 } 00:29:04.554 }, 00:29:04.554 "base_bdevs_list": [ 00:29:04.554 { 00:29:04.554 "name": "spare", 00:29:04.554 "uuid": "0d3c9867-aad0-54fd-b8e8-2c7d50bd8659", 00:29:04.554 "is_configured": true, 00:29:04.554 "data_offset": 0, 00:29:04.554 "data_size": 65536 00:29:04.554 }, 00:29:04.554 { 00:29:04.554 "name": "BaseBdev2", 00:29:04.554 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:29:04.554 "is_configured": true, 00:29:04.554 "data_offset": 0, 00:29:04.554 "data_size": 65536 00:29:04.554 }, 00:29:04.554 { 00:29:04.554 "name": "BaseBdev3", 00:29:04.554 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:29:04.554 "is_configured": true, 00:29:04.554 "data_offset": 0, 00:29:04.554 "data_size": 65536 00:29:04.554 }, 00:29:04.554 { 00:29:04.554 "name": "BaseBdev4", 00:29:04.554 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:29:04.554 "is_configured": true, 00:29:04.554 "data_offset": 0, 00:29:04.554 "data_size": 65536 00:29:04.554 } 00:29:04.554 ] 00:29:04.554 }' 00:29:04.554 00:42:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:04.554 00:42:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:04.554 00:42:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:04.554 00:42:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:04.554 00:42:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:05.123 [2024-04-24 00:42:58.636745] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:05.123 [2024-04-24 00:42:58.637103] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:05.123 [2024-04-24 00:42:58.637357] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:05.689 00:42:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:05.689 00:42:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:05.689 00:42:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:05.689 00:42:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:05.689 00:42:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:05.689 00:42:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:05.689 00:42:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.689 00:42:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.946 00:42:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:05.946 "name": "raid_bdev1", 00:29:05.946 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:29:05.946 "strip_size_kb": 64, 00:29:05.946 "state": "online", 00:29:05.946 "raid_level": "raid5f", 00:29:05.946 "superblock": false, 00:29:05.946 "num_base_bdevs": 4, 00:29:05.946 "num_base_bdevs_discovered": 4, 00:29:05.946 "num_base_bdevs_operational": 4, 00:29:05.946 "base_bdevs_list": [ 00:29:05.946 { 
00:29:05.946 "name": "spare", 00:29:05.947 "uuid": "0d3c9867-aad0-54fd-b8e8-2c7d50bd8659", 00:29:05.947 "is_configured": true, 00:29:05.947 "data_offset": 0, 00:29:05.947 "data_size": 65536 00:29:05.947 }, 00:29:05.947 { 00:29:05.947 "name": "BaseBdev2", 00:29:05.947 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:29:05.947 "is_configured": true, 00:29:05.947 "data_offset": 0, 00:29:05.947 "data_size": 65536 00:29:05.947 }, 00:29:05.947 { 00:29:05.947 "name": "BaseBdev3", 00:29:05.947 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:29:05.947 "is_configured": true, 00:29:05.947 "data_offset": 0, 00:29:05.947 "data_size": 65536 00:29:05.947 }, 00:29:05.947 { 00:29:05.947 "name": "BaseBdev4", 00:29:05.947 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:29:05.947 "is_configured": true, 00:29:05.947 "data_offset": 0, 00:29:05.947 "data_size": 65536 00:29:05.947 } 00:29:05.947 ] 00:29:05.947 }' 00:29:05.947 00:42:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:05.947 00:42:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:05.947 00:42:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:05.947 00:42:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:29:05.947 00:42:59 -- bdev/bdev_raid.sh@660 -- # break 00:29:05.947 00:42:59 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:05.947 00:42:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:05.947 00:42:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:29:05.947 00:42:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:29:05.947 00:42:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:05.947 00:42:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.947 00:42:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.204 00:42:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:06.204 "name": "raid_bdev1", 00:29:06.204 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:29:06.204 "strip_size_kb": 64, 00:29:06.204 "state": "online", 00:29:06.204 "raid_level": "raid5f", 00:29:06.204 "superblock": false, 00:29:06.204 "num_base_bdevs": 4, 00:29:06.204 "num_base_bdevs_discovered": 4, 00:29:06.204 "num_base_bdevs_operational": 4, 00:29:06.204 "base_bdevs_list": [ 00:29:06.204 { 00:29:06.204 "name": "spare", 00:29:06.204 "uuid": "0d3c9867-aad0-54fd-b8e8-2c7d50bd8659", 00:29:06.204 "is_configured": true, 00:29:06.204 "data_offset": 0, 00:29:06.204 "data_size": 65536 00:29:06.204 }, 00:29:06.204 { 00:29:06.204 "name": "BaseBdev2", 00:29:06.204 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:29:06.204 "is_configured": true, 00:29:06.204 "data_offset": 0, 00:29:06.204 "data_size": 65536 00:29:06.204 }, 00:29:06.204 { 00:29:06.204 "name": "BaseBdev3", 00:29:06.204 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:29:06.204 "is_configured": true, 00:29:06.204 "data_offset": 0, 00:29:06.204 "data_size": 65536 00:29:06.204 }, 00:29:06.204 { 00:29:06.204 "name": "BaseBdev4", 00:29:06.204 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:29:06.204 "is_configured": true, 00:29:06.204 "data_offset": 0, 00:29:06.204 "data_size": 65536 00:29:06.204 } 00:29:06.204 ] 00:29:06.204 }' 00:29:06.204 00:42:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:06.204 00:42:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:06.204 00:42:59 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.463 00:43:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.722 00:43:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:06.722 "name": "raid_bdev1", 00:29:06.722 "uuid": "1665c00d-7725-4249-8ce4-088c4e7456bd", 00:29:06.722 "strip_size_kb": 64, 00:29:06.722 "state": "online", 00:29:06.722 "raid_level": "raid5f", 00:29:06.722 "superblock": false, 00:29:06.722 "num_base_bdevs": 4, 00:29:06.722 "num_base_bdevs_discovered": 4, 00:29:06.722 "num_base_bdevs_operational": 4, 00:29:06.722 "base_bdevs_list": [ 00:29:06.722 { 00:29:06.722 "name": "spare", 00:29:06.722 "uuid": "0d3c9867-aad0-54fd-b8e8-2c7d50bd8659", 00:29:06.722 "is_configured": true, 00:29:06.722 "data_offset": 0, 00:29:06.722 "data_size": 65536 00:29:06.722 }, 00:29:06.722 { 00:29:06.722 "name": "BaseBdev2", 00:29:06.722 "uuid": "7de57ec3-0f3b-4e29-8df4-c0ccce84de31", 00:29:06.722 "is_configured": true, 00:29:06.722 "data_offset": 0, 00:29:06.722 "data_size": 65536 00:29:06.722 }, 00:29:06.722 { 00:29:06.722 "name": "BaseBdev3", 00:29:06.722 "uuid": "189b336a-ce1f-47ad-9855-17715950d6b8", 00:29:06.722 "is_configured": true, 00:29:06.722 "data_offset": 0, 00:29:06.722 "data_size": 65536 00:29:06.722 }, 00:29:06.722 { 00:29:06.722 "name": "BaseBdev4", 00:29:06.722 "uuid": "b291ac14-e1c6-4ea5-9710-98e55452a7e3", 00:29:06.722 "is_configured": true, 00:29:06.722 "data_offset": 0, 00:29:06.722 "data_size": 65536 00:29:06.722 } 00:29:06.722 ] 00:29:06.722 }' 00:29:06.722 00:43:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:06.722 00:43:00 -- common/autotest_common.sh@10 -- # set +x 00:29:07.287 00:43:00 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:07.545 [2024-04-24 00:43:01.182906] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:07.545 [2024-04-24 00:43:01.183226] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:07.545 [2024-04-24 00:43:01.183462] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:07.545 [2024-04-24 00:43:01.183696] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:07.545 [2024-04-24 00:43:01.183833] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:29:07.545 00:43:01 -- bdev/bdev_raid.sh@671 -- # jq length 
00:29:07.545 00:43:01 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.804 00:43:01 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:29:07.804 00:43:01 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:29:07.805 00:43:01 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:07.805 00:43:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:07.805 00:43:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:07.805 00:43:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:07.805 00:43:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:07.805 00:43:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:07.805 00:43:01 -- bdev/nbd_common.sh@12 -- # local i 00:29:07.805 00:43:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:07.805 00:43:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:07.805 00:43:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:08.063 /dev/nbd0 00:29:08.063 00:43:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:08.063 00:43:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:08.063 00:43:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:29:08.063 00:43:01 -- common/autotest_common.sh@855 -- # local i 00:29:08.063 00:43:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:08.063 00:43:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:08.063 00:43:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:29:08.063 00:43:01 -- common/autotest_common.sh@859 -- # break 00:29:08.063 00:43:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:08.063 00:43:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:08.063 00:43:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:08.063 1+0 records in 00:29:08.063 1+0 records out 00:29:08.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481681 s, 8.5 MB/s 00:29:08.063 00:43:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.063 00:43:01 -- common/autotest_common.sh@872 -- # size=4096 00:29:08.063 00:43:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.063 00:43:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:08.063 00:43:01 -- common/autotest_common.sh@875 -- # return 0 00:29:08.063 00:43:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:08.063 00:43:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:08.063 00:43:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:08.342 /dev/nbd1 00:29:08.342 00:43:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:08.342 00:43:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:08.342 00:43:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:29:08.342 00:43:02 -- common/autotest_common.sh@855 -- # local i 00:29:08.342 00:43:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:08.342 00:43:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:08.342 00:43:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:29:08.599 00:43:02 -- common/autotest_common.sh@859 -- # break 
00:29:08.599 00:43:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:08.599 00:43:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:08.599 00:43:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:08.599 1+0 records in 00:29:08.599 1+0 records out 00:29:08.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004395 s, 9.3 MB/s 00:29:08.599 00:43:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.599 00:43:02 -- common/autotest_common.sh@872 -- # size=4096 00:29:08.599 00:43:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:08.599 00:43:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:08.599 00:43:02 -- common/autotest_common.sh@875 -- # return 0 00:29:08.599 00:43:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:08.599 00:43:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:08.599 00:43:02 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:08.857 00:43:02 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:08.857 00:43:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:08.857 00:43:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:08.857 00:43:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:08.857 00:43:02 -- bdev/nbd_common.sh@51 -- # local i 00:29:08.857 00:43:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:08.857 00:43:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:08.857 00:43:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:08.857 00:43:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:08.857 00:43:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:08.857 00:43:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:08.857 00:43:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:08.857 00:43:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:09.115 00:43:02 -- bdev/nbd_common.sh@41 -- # break 00:29:09.115 00:43:02 -- bdev/nbd_common.sh@45 -- # return 0 00:29:09.115 00:43:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:09.115 00:43:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:09.374 00:43:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:09.374 00:43:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:09.374 00:43:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:09.374 00:43:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:09.374 00:43:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:09.374 00:43:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:09.374 00:43:02 -- bdev/nbd_common.sh@41 -- # break 00:29:09.374 00:43:02 -- bdev/nbd_common.sh@45 -- # return 0 00:29:09.374 00:43:02 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:29:09.374 00:43:02 -- bdev/bdev_raid.sh@709 -- # killprocess 140304 00:29:09.374 00:43:02 -- common/autotest_common.sh@936 -- # '[' -z 140304 ']' 00:29:09.374 00:43:02 -- common/autotest_common.sh@940 -- # kill -0 140304 00:29:09.374 00:43:02 -- common/autotest_common.sh@941 -- # uname 00:29:09.374 00:43:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:09.374 00:43:02 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140304 00:29:09.374 00:43:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:09.374 killing process with pid 140304 00:29:09.374 00:43:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:09.374 00:43:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140304' 00:29:09.374 00:43:02 -- common/autotest_common.sh@955 -- # kill 140304 00:29:09.374 Received shutdown signal, test time was about 60.000000 seconds 00:29:09.374 00:29:09.374 Latency(us) 00:29:09.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.374 =================================================================================================================== 00:29:09.374 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:09.374 00:43:02 -- common/autotest_common.sh@960 -- # wait 140304 00:29:09.374 [2024-04-24 00:43:02.959972] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:09.941 [2024-04-24 00:43:03.513433] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:11.315 ************************************ 00:29:11.315 END TEST raid5f_rebuild_test 00:29:11.315 ************************************ 00:29:11.315 00:43:04 -- bdev/bdev_raid.sh@711 -- # return 0 00:29:11.315 00:29:11.315 real 0m27.078s 00:29:11.315 user 0m38.774s 00:29:11.315 sys 0m3.544s 00:29:11.315 00:43:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:11.315 00:43:04 -- common/autotest_common.sh@10 -- # set +x 00:29:11.315 00:43:05 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:29:11.316 00:43:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:29:11.316 00:43:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:11.316 00:43:05 -- common/autotest_common.sh@10 -- # set +x 00:29:11.316 ************************************ 00:29:11.316 START TEST raid5f_rebuild_test_sb 00:29:11.316 ************************************ 00:29:11.316 00:43:05 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 4 true false 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@544 -- # raid_pid=140944 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@545 -- # waitforlisten 140944 /var/tmp/spdk-raid.sock 00:29:11.316 00:43:05 -- common/autotest_common.sh@817 -- # '[' -z 140944 ']' 00:29:11.316 00:43:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:11.316 00:43:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:11.316 00:43:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:11.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:11.316 00:43:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:11.316 00:43:05 -- common/autotest_common.sh@10 -- # set +x 00:29:11.316 00:43:05 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:11.574 [2024-04-24 00:43:05.193763] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:29:11.574 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:11.574 Zero copy mechanism will not be used. 
00:29:11.574 [2024-04-24 00:43:05.193983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140944 ] 00:29:11.831 [2024-04-24 00:43:05.390768] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.099 [2024-04-24 00:43:05.721697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.368 [2024-04-24 00:43:05.992175] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:12.629 00:43:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:12.629 00:43:06 -- common/autotest_common.sh@850 -- # return 0 00:29:12.629 00:43:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:29:12.629 00:43:06 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:29:12.629 00:43:06 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:12.888 BaseBdev1_malloc 00:29:12.888 00:43:06 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:13.146 [2024-04-24 00:43:06.858226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:13.146 [2024-04-24 00:43:06.858368] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.146 [2024-04-24 00:43:06.858411] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:29:13.146 [2024-04-24 00:43:06.858470] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.146 [2024-04-24 00:43:06.861383] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.146 [2024-04-24 00:43:06.861472] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:13.146 BaseBdev1 00:29:13.146 00:43:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:29:13.146 00:43:06 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:29:13.146 00:43:06 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:13.403 BaseBdev2_malloc 00:29:13.661 00:43:07 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:13.920 [2024-04-24 00:43:07.484392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:13.920 [2024-04-24 00:43:07.484536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.920 [2024-04-24 00:43:07.484613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:13.920 [2024-04-24 00:43:07.484703] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.920 [2024-04-24 00:43:07.488079] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.920 [2024-04-24 00:43:07.488216] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:13.920 BaseBdev2 00:29:13.920 00:43:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:29:13.920 00:43:07 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:29:13.920 00:43:07 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:14.178 BaseBdev3_malloc 00:29:14.178 00:43:07 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:14.436 [2024-04-24 00:43:08.045968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:14.436 [2024-04-24 00:43:08.046079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:14.436 [2024-04-24 00:43:08.046125] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:29:14.436 [2024-04-24 00:43:08.046171] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:14.436 [2024-04-24 00:43:08.048855] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:14.436 [2024-04-24 00:43:08.048932] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:14.436 BaseBdev3 00:29:14.436 00:43:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:29:14.436 00:43:08 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:29:14.436 00:43:08 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:14.693 BaseBdev4_malloc 00:29:14.693 00:43:08 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:14.951 [2024-04-24 00:43:08.693211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:14.952 [2024-04-24 00:43:08.693367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:14.952 [2024-04-24 00:43:08.693430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:29:14.952 [2024-04-24 00:43:08.693493] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:14.952 [2024-04-24 00:43:08.696819] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:14.952 [2024-04-24 00:43:08.696937] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:14.952 BaseBdev4 00:29:14.952 00:43:08 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:15.518 spare_malloc 00:29:15.518 00:43:09 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:15.518 spare_delay 00:29:15.518 00:43:09 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:16.083 [2024-04-24 00:43:09.573267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:16.083 [2024-04-24 00:43:09.573404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.083 [2024-04-24 00:43:09.573469] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:29:16.083 [2024-04-24 00:43:09.573534] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.083 [2024-04-24 00:43:09.576545] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:29:16.083 [2024-04-24 00:43:09.576650] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:16.083 spare 00:29:16.083 00:43:09 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:16.083 [2024-04-24 00:43:09.861629] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:16.083 [2024-04-24 00:43:09.864744] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:16.083 [2024-04-24 00:43:09.864899] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:16.083 [2024-04-24 00:43:09.864989] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:16.083 [2024-04-24 00:43:09.865330] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:29:16.083 [2024-04-24 00:43:09.865365] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:16.083 [2024-04-24 00:43:09.865581] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:29:16.083 [2024-04-24 00:43:09.874975] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:29:16.083 [2024-04-24 00:43:09.875051] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:29:16.083 [2024-04-24 00:43:09.875409] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:16.341 00:43:09 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:16.341 00:43:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:16.341 00:43:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:16.341 00:43:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:16.341 00:43:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:16.341 00:43:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:16.341 00:43:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:16.341 00:43:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:16.341 00:43:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:16.341 00:43:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:16.341 00:43:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.341 00:43:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.598 00:43:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:16.598 "name": "raid_bdev1", 00:29:16.598 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:16.598 "strip_size_kb": 64, 00:29:16.598 "state": "online", 00:29:16.598 "raid_level": "raid5f", 00:29:16.598 "superblock": true, 00:29:16.598 "num_base_bdevs": 4, 00:29:16.598 "num_base_bdevs_discovered": 4, 00:29:16.598 "num_base_bdevs_operational": 4, 00:29:16.598 "base_bdevs_list": [ 00:29:16.598 { 00:29:16.598 "name": "BaseBdev1", 00:29:16.598 "uuid": "c2207cae-9643-5393-88a5-5545f56a546e", 00:29:16.598 "is_configured": true, 00:29:16.598 "data_offset": 2048, 00:29:16.598 "data_size": 63488 00:29:16.598 }, 00:29:16.598 { 00:29:16.598 "name": "BaseBdev2", 00:29:16.598 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:16.598 "is_configured": true, 00:29:16.598 
"data_offset": 2048, 00:29:16.598 "data_size": 63488 00:29:16.598 }, 00:29:16.598 { 00:29:16.598 "name": "BaseBdev3", 00:29:16.598 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:16.598 "is_configured": true, 00:29:16.598 "data_offset": 2048, 00:29:16.598 "data_size": 63488 00:29:16.598 }, 00:29:16.598 { 00:29:16.598 "name": "BaseBdev4", 00:29:16.598 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:16.598 "is_configured": true, 00:29:16.598 "data_offset": 2048, 00:29:16.598 "data_size": 63488 00:29:16.598 } 00:29:16.598 ] 00:29:16.598 }' 00:29:16.598 00:43:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:16.598 00:43:10 -- common/autotest_common.sh@10 -- # set +x 00:29:17.166 00:43:10 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:17.166 00:43:10 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:29:17.438 [2024-04-24 00:43:11.141418] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:17.438 00:43:11 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:29:17.438 00:43:11 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.438 00:43:11 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:18.004 00:43:11 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:29:18.004 00:43:11 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:29:18.004 00:43:11 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:29:18.004 00:43:11 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@12 -- # local i 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:18.004 [2024-04-24 00:43:11.709392] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:18.004 /dev/nbd0 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:18.004 00:43:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:29:18.004 00:43:11 -- common/autotest_common.sh@855 -- # local i 00:29:18.004 00:43:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:18.004 00:43:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:18.004 00:43:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:29:18.004 00:43:11 -- common/autotest_common.sh@859 -- # break 00:29:18.004 00:43:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:18.004 00:43:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:18.004 00:43:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:18.004 1+0 records in 00:29:18.004 1+0 records out 00:29:18.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000704452 s, 5.8 MB/s 00:29:18.004 00:43:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:18.004 00:43:11 -- common/autotest_common.sh@872 -- # size=4096 00:29:18.004 00:43:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:18.004 00:43:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:18.004 00:43:11 -- common/autotest_common.sh@875 -- # return 0 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:18.004 00:43:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:18.004 00:43:11 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:29:18.004 00:43:11 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:29:18.004 00:43:11 -- bdev/bdev_raid.sh@582 -- # echo 192 00:29:18.004 00:43:11 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:29:18.938 496+0 records in 00:29:18.938 496+0 records out 00:29:18.938 97517568 bytes (98 MB, 93 MiB) copied, 0.719134 s, 136 MB/s 00:29:18.938 00:43:12 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:18.938 00:43:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:18.938 00:43:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:18.938 00:43:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:18.938 00:43:12 -- bdev/nbd_common.sh@51 -- # local i 00:29:18.938 00:43:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:18.938 00:43:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:19.209 [2024-04-24 00:43:12.845778] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:19.209 00:43:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:19.209 00:43:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:19.209 00:43:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:19.209 00:43:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:19.209 00:43:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:19.209 00:43:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:19.209 00:43:12 -- bdev/nbd_common.sh@41 -- # break 00:29:19.209 00:43:12 -- bdev/nbd_common.sh@45 -- # return 0 00:29:19.209 00:43:12 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:19.469 [2024-04-24 00:43:13.132915] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:19.469 00:43:13 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:19.469 00:43:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:19.469 00:43:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:19.469 00:43:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:19.469 00:43:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:19.469 00:43:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:19.469 00:43:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:19.469 00:43:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:19.469 00:43:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:19.469 00:43:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:19.469 00:43:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.469 00:43:13 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.727 00:43:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:19.727 "name": "raid_bdev1", 00:29:19.727 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:19.727 "strip_size_kb": 64, 00:29:19.727 "state": "online", 00:29:19.727 "raid_level": "raid5f", 00:29:19.727 "superblock": true, 00:29:19.727 "num_base_bdevs": 4, 00:29:19.727 "num_base_bdevs_discovered": 3, 00:29:19.727 "num_base_bdevs_operational": 3, 00:29:19.727 "base_bdevs_list": [ 00:29:19.727 { 00:29:19.727 "name": null, 00:29:19.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:19.727 "is_configured": false, 00:29:19.727 "data_offset": 2048, 00:29:19.727 "data_size": 63488 00:29:19.727 }, 00:29:19.727 { 00:29:19.727 "name": "BaseBdev2", 00:29:19.727 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:19.727 "is_configured": true, 00:29:19.727 "data_offset": 2048, 00:29:19.727 "data_size": 63488 00:29:19.727 }, 00:29:19.727 { 00:29:19.727 "name": "BaseBdev3", 00:29:19.727 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:19.727 "is_configured": true, 00:29:19.727 "data_offset": 2048, 00:29:19.728 "data_size": 63488 00:29:19.728 }, 00:29:19.728 { 00:29:19.728 "name": "BaseBdev4", 00:29:19.728 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:19.728 "is_configured": true, 00:29:19.728 "data_offset": 2048, 00:29:19.728 "data_size": 63488 00:29:19.728 } 00:29:19.728 ] 00:29:19.728 }' 00:29:19.728 00:43:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:19.728 00:43:13 -- common/autotest_common.sh@10 -- # set +x 00:29:20.295 00:43:13 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:20.553 [2024-04-24 00:43:14.175419] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:29:20.553 [2024-04-24 00:43:14.175485] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:20.553 [2024-04-24 00:43:14.195448] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a710 00:29:20.553 [2024-04-24 00:43:14.207922] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:20.553 00:43:14 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:29:21.494 00:43:15 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:21.494 00:43:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:21.494 00:43:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:21.494 00:43:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:21.494 00:43:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:21.494 00:43:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.494 00:43:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.754 00:43:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:21.754 "name": "raid_bdev1", 00:29:21.754 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:21.754 "strip_size_kb": 64, 00:29:21.754 "state": "online", 00:29:21.754 "raid_level": "raid5f", 00:29:21.754 "superblock": true, 00:29:21.754 "num_base_bdevs": 4, 00:29:21.754 "num_base_bdevs_discovered": 4, 00:29:21.754 "num_base_bdevs_operational": 4, 00:29:21.754 "process": { 00:29:21.754 "type": "rebuild", 00:29:21.754 "target": "spare", 00:29:21.754 "progress": { 
00:29:21.754 "blocks": 23040, 00:29:21.754 "percent": 12 00:29:21.754 } 00:29:21.754 }, 00:29:21.754 "base_bdevs_list": [ 00:29:21.754 { 00:29:21.754 "name": "spare", 00:29:21.754 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:21.754 "is_configured": true, 00:29:21.754 "data_offset": 2048, 00:29:21.754 "data_size": 63488 00:29:21.754 }, 00:29:21.754 { 00:29:21.754 "name": "BaseBdev2", 00:29:21.754 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:21.754 "is_configured": true, 00:29:21.754 "data_offset": 2048, 00:29:21.754 "data_size": 63488 00:29:21.754 }, 00:29:21.754 { 00:29:21.754 "name": "BaseBdev3", 00:29:21.754 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:21.754 "is_configured": true, 00:29:21.754 "data_offset": 2048, 00:29:21.754 "data_size": 63488 00:29:21.754 }, 00:29:21.754 { 00:29:21.754 "name": "BaseBdev4", 00:29:21.754 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:21.754 "is_configured": true, 00:29:21.754 "data_offset": 2048, 00:29:21.754 "data_size": 63488 00:29:21.754 } 00:29:21.754 ] 00:29:21.754 }' 00:29:21.754 00:43:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:21.754 00:43:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:21.754 00:43:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:22.013 00:43:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:22.013 00:43:15 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:22.272 [2024-04-24 00:43:15.862820] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:22.272 [2024-04-24 00:43:15.924823] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:22.272 [2024-04-24 00:43:15.924943] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:22.272 00:43:15 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:22.272 00:43:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:22.272 00:43:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:22.272 00:43:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:22.272 00:43:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:22.272 00:43:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:29:22.272 00:43:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:22.272 00:43:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:22.272 00:43:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:22.272 00:43:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:22.272 00:43:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.272 00:43:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.530 00:43:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:22.530 "name": "raid_bdev1", 00:29:22.530 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:22.530 "strip_size_kb": 64, 00:29:22.530 "state": "online", 00:29:22.530 "raid_level": "raid5f", 00:29:22.530 "superblock": true, 00:29:22.530 "num_base_bdevs": 4, 00:29:22.530 "num_base_bdevs_discovered": 3, 00:29:22.530 "num_base_bdevs_operational": 3, 00:29:22.530 "base_bdevs_list": [ 00:29:22.530 { 00:29:22.530 "name": null, 00:29:22.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.530 "is_configured": 
false, 00:29:22.530 "data_offset": 2048, 00:29:22.530 "data_size": 63488 00:29:22.530 }, 00:29:22.530 { 00:29:22.530 "name": "BaseBdev2", 00:29:22.530 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:22.530 "is_configured": true, 00:29:22.530 "data_offset": 2048, 00:29:22.530 "data_size": 63488 00:29:22.530 }, 00:29:22.530 { 00:29:22.530 "name": "BaseBdev3", 00:29:22.530 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:22.530 "is_configured": true, 00:29:22.530 "data_offset": 2048, 00:29:22.530 "data_size": 63488 00:29:22.530 }, 00:29:22.530 { 00:29:22.530 "name": "BaseBdev4", 00:29:22.530 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:22.530 "is_configured": true, 00:29:22.530 "data_offset": 2048, 00:29:22.530 "data_size": 63488 00:29:22.530 } 00:29:22.530 ] 00:29:22.530 }' 00:29:22.530 00:43:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:22.530 00:43:16 -- common/autotest_common.sh@10 -- # set +x 00:29:23.097 00:43:16 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:23.097 00:43:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:23.097 00:43:16 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:29:23.098 00:43:16 -- bdev/bdev_raid.sh@185 -- # local target=none 00:29:23.098 00:43:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:23.098 00:43:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.098 00:43:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.355 00:43:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:23.355 "name": "raid_bdev1", 00:29:23.355 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:23.355 "strip_size_kb": 64, 00:29:23.355 "state": "online", 00:29:23.355 "raid_level": "raid5f", 00:29:23.355 "superblock": true, 00:29:23.355 "num_base_bdevs": 4, 00:29:23.355 "num_base_bdevs_discovered": 3, 00:29:23.355 "num_base_bdevs_operational": 3, 00:29:23.355 "base_bdevs_list": [ 00:29:23.355 { 00:29:23.355 "name": null, 00:29:23.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.355 "is_configured": false, 00:29:23.355 "data_offset": 2048, 00:29:23.355 "data_size": 63488 00:29:23.355 }, 00:29:23.355 { 00:29:23.355 "name": "BaseBdev2", 00:29:23.355 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:23.355 "is_configured": true, 00:29:23.355 "data_offset": 2048, 00:29:23.355 "data_size": 63488 00:29:23.355 }, 00:29:23.355 { 00:29:23.355 "name": "BaseBdev3", 00:29:23.355 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:23.355 "is_configured": true, 00:29:23.355 "data_offset": 2048, 00:29:23.355 "data_size": 63488 00:29:23.355 }, 00:29:23.355 { 00:29:23.355 "name": "BaseBdev4", 00:29:23.355 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:23.355 "is_configured": true, 00:29:23.355 "data_offset": 2048, 00:29:23.355 "data_size": 63488 00:29:23.355 } 00:29:23.355 ] 00:29:23.355 }' 00:29:23.355 00:43:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:23.613 00:43:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:23.613 00:43:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:23.613 00:43:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:29:23.613 00:43:17 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:23.871 [2024-04-24 00:43:17.493580] bdev_raid.c:3278:raid_bdev_attach_base_bdev: 
*DEBUG*: attach_base_device: spare 00:29:23.871 [2024-04-24 00:43:17.493648] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:23.871 [2024-04-24 00:43:17.511280] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:29:23.871 [2024-04-24 00:43:17.523331] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:23.871 00:43:17 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:29:24.804 00:43:18 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:24.804 00:43:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:24.804 00:43:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:24.804 00:43:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:24.804 00:43:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:24.804 00:43:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.804 00:43:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.061 00:43:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:25.062 "name": "raid_bdev1", 00:29:25.062 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:25.062 "strip_size_kb": 64, 00:29:25.062 "state": "online", 00:29:25.062 "raid_level": "raid5f", 00:29:25.062 "superblock": true, 00:29:25.062 "num_base_bdevs": 4, 00:29:25.062 "num_base_bdevs_discovered": 4, 00:29:25.062 "num_base_bdevs_operational": 4, 00:29:25.062 "process": { 00:29:25.062 "type": "rebuild", 00:29:25.062 "target": "spare", 00:29:25.062 "progress": { 00:29:25.062 "blocks": 23040, 00:29:25.062 "percent": 12 00:29:25.062 } 00:29:25.062 }, 00:29:25.062 "base_bdevs_list": [ 00:29:25.062 { 00:29:25.062 "name": "spare", 00:29:25.062 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:25.062 "is_configured": true, 00:29:25.062 "data_offset": 2048, 00:29:25.062 "data_size": 63488 00:29:25.062 }, 00:29:25.062 { 00:29:25.062 "name": "BaseBdev2", 00:29:25.062 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:25.062 "is_configured": true, 00:29:25.062 "data_offset": 2048, 00:29:25.062 "data_size": 63488 00:29:25.062 }, 00:29:25.062 { 00:29:25.062 "name": "BaseBdev3", 00:29:25.062 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:25.062 "is_configured": true, 00:29:25.062 "data_offset": 2048, 00:29:25.062 "data_size": 63488 00:29:25.062 }, 00:29:25.062 { 00:29:25.062 "name": "BaseBdev4", 00:29:25.062 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:25.062 "is_configured": true, 00:29:25.062 "data_offset": 2048, 00:29:25.062 "data_size": 63488 00:29:25.062 } 00:29:25.062 ] 00:29:25.062 }' 00:29:25.062 00:43:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:25.319 00:43:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:25.319 00:43:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:25.319 00:43:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:25.319 00:43:18 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:29:25.319 00:43:18 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:29:25.319 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:29:25.319 00:43:18 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:29:25.319 00:43:18 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:29:25.320 00:43:18 -- bdev/bdev_raid.sh@657 -- # local timeout=821 
00:29:25.320 00:43:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:25.320 00:43:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:25.320 00:43:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:25.320 00:43:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:25.320 00:43:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:25.320 00:43:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:25.320 00:43:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.320 00:43:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.577 00:43:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:25.577 "name": "raid_bdev1", 00:29:25.577 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:25.577 "strip_size_kb": 64, 00:29:25.577 "state": "online", 00:29:25.577 "raid_level": "raid5f", 00:29:25.577 "superblock": true, 00:29:25.577 "num_base_bdevs": 4, 00:29:25.577 "num_base_bdevs_discovered": 4, 00:29:25.577 "num_base_bdevs_operational": 4, 00:29:25.577 "process": { 00:29:25.577 "type": "rebuild", 00:29:25.577 "target": "spare", 00:29:25.577 "progress": { 00:29:25.577 "blocks": 30720, 00:29:25.578 "percent": 16 00:29:25.578 } 00:29:25.578 }, 00:29:25.578 "base_bdevs_list": [ 00:29:25.578 { 00:29:25.578 "name": "spare", 00:29:25.578 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:25.578 "is_configured": true, 00:29:25.578 "data_offset": 2048, 00:29:25.578 "data_size": 63488 00:29:25.578 }, 00:29:25.578 { 00:29:25.578 "name": "BaseBdev2", 00:29:25.578 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:25.578 "is_configured": true, 00:29:25.578 "data_offset": 2048, 00:29:25.578 "data_size": 63488 00:29:25.578 }, 00:29:25.578 { 00:29:25.578 "name": "BaseBdev3", 00:29:25.578 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:25.578 "is_configured": true, 00:29:25.578 "data_offset": 2048, 00:29:25.578 "data_size": 63488 00:29:25.578 }, 00:29:25.578 { 00:29:25.578 "name": "BaseBdev4", 00:29:25.578 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:25.578 "is_configured": true, 00:29:25.578 "data_offset": 2048, 00:29:25.578 "data_size": 63488 00:29:25.578 } 00:29:25.578 ] 00:29:25.578 }' 00:29:25.578 00:43:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:25.578 00:43:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:25.578 00:43:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:25.578 00:43:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:25.578 00:43:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:26.531 00:43:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:26.531 00:43:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:26.531 00:43:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:26.532 00:43:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:26.532 00:43:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:26.532 00:43:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:26.532 00:43:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.532 00:43:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.795 00:43:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:26.795 "name": 
"raid_bdev1", 00:29:26.795 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:26.795 "strip_size_kb": 64, 00:29:26.795 "state": "online", 00:29:26.795 "raid_level": "raid5f", 00:29:26.795 "superblock": true, 00:29:26.795 "num_base_bdevs": 4, 00:29:26.795 "num_base_bdevs_discovered": 4, 00:29:26.795 "num_base_bdevs_operational": 4, 00:29:26.795 "process": { 00:29:26.795 "type": "rebuild", 00:29:26.795 "target": "spare", 00:29:26.795 "progress": { 00:29:26.795 "blocks": 55680, 00:29:26.795 "percent": 29 00:29:26.795 } 00:29:26.795 }, 00:29:26.795 "base_bdevs_list": [ 00:29:26.795 { 00:29:26.795 "name": "spare", 00:29:26.795 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:26.795 "is_configured": true, 00:29:26.795 "data_offset": 2048, 00:29:26.795 "data_size": 63488 00:29:26.795 }, 00:29:26.795 { 00:29:26.795 "name": "BaseBdev2", 00:29:26.795 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:26.795 "is_configured": true, 00:29:26.795 "data_offset": 2048, 00:29:26.795 "data_size": 63488 00:29:26.795 }, 00:29:26.795 { 00:29:26.795 "name": "BaseBdev3", 00:29:26.795 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:26.795 "is_configured": true, 00:29:26.795 "data_offset": 2048, 00:29:26.795 "data_size": 63488 00:29:26.795 }, 00:29:26.795 { 00:29:26.795 "name": "BaseBdev4", 00:29:26.795 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:26.795 "is_configured": true, 00:29:26.795 "data_offset": 2048, 00:29:26.795 "data_size": 63488 00:29:26.795 } 00:29:26.795 ] 00:29:26.795 }' 00:29:26.795 00:43:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:27.052 00:43:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:27.052 00:43:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:27.052 00:43:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:27.052 00:43:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:27.984 00:43:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:27.984 00:43:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:27.984 00:43:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:27.984 00:43:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:27.984 00:43:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:27.984 00:43:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:27.984 00:43:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.984 00:43:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.242 00:43:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:28.242 "name": "raid_bdev1", 00:29:28.242 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:28.242 "strip_size_kb": 64, 00:29:28.242 "state": "online", 00:29:28.242 "raid_level": "raid5f", 00:29:28.242 "superblock": true, 00:29:28.242 "num_base_bdevs": 4, 00:29:28.242 "num_base_bdevs_discovered": 4, 00:29:28.242 "num_base_bdevs_operational": 4, 00:29:28.242 "process": { 00:29:28.242 "type": "rebuild", 00:29:28.242 "target": "spare", 00:29:28.242 "progress": { 00:29:28.242 "blocks": 82560, 00:29:28.242 "percent": 43 00:29:28.242 } 00:29:28.242 }, 00:29:28.242 "base_bdevs_list": [ 00:29:28.242 { 00:29:28.242 "name": "spare", 00:29:28.242 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:28.242 "is_configured": true, 00:29:28.242 "data_offset": 2048, 00:29:28.242 "data_size": 63488 00:29:28.242 }, 00:29:28.242 { 00:29:28.242 
"name": "BaseBdev2", 00:29:28.242 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:28.242 "is_configured": true, 00:29:28.242 "data_offset": 2048, 00:29:28.242 "data_size": 63488 00:29:28.242 }, 00:29:28.242 { 00:29:28.242 "name": "BaseBdev3", 00:29:28.242 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:28.242 "is_configured": true, 00:29:28.242 "data_offset": 2048, 00:29:28.242 "data_size": 63488 00:29:28.242 }, 00:29:28.242 { 00:29:28.242 "name": "BaseBdev4", 00:29:28.242 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:28.242 "is_configured": true, 00:29:28.242 "data_offset": 2048, 00:29:28.242 "data_size": 63488 00:29:28.242 } 00:29:28.242 ] 00:29:28.242 }' 00:29:28.242 00:43:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:28.242 00:43:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:28.242 00:43:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:28.242 00:43:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:28.242 00:43:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:29.653 00:43:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:29.653 00:43:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:29.653 00:43:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:29.653 00:43:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:29.653 00:43:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:29.653 00:43:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:29.653 00:43:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.653 00:43:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.653 00:43:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:29.653 "name": "raid_bdev1", 00:29:29.653 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:29.653 "strip_size_kb": 64, 00:29:29.653 "state": "online", 00:29:29.653 "raid_level": "raid5f", 00:29:29.653 "superblock": true, 00:29:29.653 "num_base_bdevs": 4, 00:29:29.653 "num_base_bdevs_discovered": 4, 00:29:29.653 "num_base_bdevs_operational": 4, 00:29:29.653 "process": { 00:29:29.653 "type": "rebuild", 00:29:29.653 "target": "spare", 00:29:29.653 "progress": { 00:29:29.653 "blocks": 109440, 00:29:29.653 "percent": 57 00:29:29.653 } 00:29:29.653 }, 00:29:29.653 "base_bdevs_list": [ 00:29:29.653 { 00:29:29.653 "name": "spare", 00:29:29.653 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:29.653 "is_configured": true, 00:29:29.653 "data_offset": 2048, 00:29:29.653 "data_size": 63488 00:29:29.653 }, 00:29:29.653 { 00:29:29.653 "name": "BaseBdev2", 00:29:29.653 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:29.653 "is_configured": true, 00:29:29.653 "data_offset": 2048, 00:29:29.653 "data_size": 63488 00:29:29.653 }, 00:29:29.653 { 00:29:29.653 "name": "BaseBdev3", 00:29:29.653 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:29.653 "is_configured": true, 00:29:29.653 "data_offset": 2048, 00:29:29.653 "data_size": 63488 00:29:29.653 }, 00:29:29.653 { 00:29:29.653 "name": "BaseBdev4", 00:29:29.653 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:29.653 "is_configured": true, 00:29:29.653 "data_offset": 2048, 00:29:29.653 "data_size": 63488 00:29:29.653 } 00:29:29.653 ] 00:29:29.653 }' 00:29:29.653 00:43:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:29.653 00:43:23 -- bdev/bdev_raid.sh@190 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:29:29.653 00:43:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:29.912 00:43:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:29.912 00:43:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:30.846 00:43:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:30.846 00:43:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:30.846 00:43:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:30.846 00:43:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:30.846 00:43:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:30.846 00:43:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:30.846 00:43:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:30.846 00:43:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.104 00:43:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:31.104 "name": "raid_bdev1", 00:29:31.104 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:31.104 "strip_size_kb": 64, 00:29:31.104 "state": "online", 00:29:31.104 "raid_level": "raid5f", 00:29:31.104 "superblock": true, 00:29:31.104 "num_base_bdevs": 4, 00:29:31.104 "num_base_bdevs_discovered": 4, 00:29:31.104 "num_base_bdevs_operational": 4, 00:29:31.104 "process": { 00:29:31.104 "type": "rebuild", 00:29:31.104 "target": "spare", 00:29:31.104 "progress": { 00:29:31.104 "blocks": 136320, 00:29:31.104 "percent": 71 00:29:31.104 } 00:29:31.104 }, 00:29:31.104 "base_bdevs_list": [ 00:29:31.104 { 00:29:31.104 "name": "spare", 00:29:31.104 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:31.104 "is_configured": true, 00:29:31.104 "data_offset": 2048, 00:29:31.104 "data_size": 63488 00:29:31.104 }, 00:29:31.104 { 00:29:31.104 "name": "BaseBdev2", 00:29:31.104 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:31.104 "is_configured": true, 00:29:31.104 "data_offset": 2048, 00:29:31.104 "data_size": 63488 00:29:31.104 }, 00:29:31.104 { 00:29:31.104 "name": "BaseBdev3", 00:29:31.104 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:31.104 "is_configured": true, 00:29:31.104 "data_offset": 2048, 00:29:31.104 "data_size": 63488 00:29:31.104 }, 00:29:31.104 { 00:29:31.104 "name": "BaseBdev4", 00:29:31.104 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:31.104 "is_configured": true, 00:29:31.104 "data_offset": 2048, 00:29:31.104 "data_size": 63488 00:29:31.104 } 00:29:31.104 ] 00:29:31.104 }' 00:29:31.104 00:43:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:31.105 00:43:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:31.105 00:43:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:31.105 00:43:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:31.105 00:43:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:32.480 00:43:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:32.480 00:43:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:32.480 00:43:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:32.480 00:43:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:32.480 00:43:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:32.480 00:43:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:32.480 00:43:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:29:32.480 00:43:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.480 00:43:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:32.480 "name": "raid_bdev1", 00:29:32.480 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:32.480 "strip_size_kb": 64, 00:29:32.480 "state": "online", 00:29:32.480 "raid_level": "raid5f", 00:29:32.480 "superblock": true, 00:29:32.480 "num_base_bdevs": 4, 00:29:32.480 "num_base_bdevs_discovered": 4, 00:29:32.480 "num_base_bdevs_operational": 4, 00:29:32.480 "process": { 00:29:32.480 "type": "rebuild", 00:29:32.480 "target": "spare", 00:29:32.480 "progress": { 00:29:32.480 "blocks": 161280, 00:29:32.480 "percent": 84 00:29:32.480 } 00:29:32.480 }, 00:29:32.480 "base_bdevs_list": [ 00:29:32.480 { 00:29:32.480 "name": "spare", 00:29:32.480 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:32.480 "is_configured": true, 00:29:32.480 "data_offset": 2048, 00:29:32.480 "data_size": 63488 00:29:32.480 }, 00:29:32.480 { 00:29:32.480 "name": "BaseBdev2", 00:29:32.480 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:32.480 "is_configured": true, 00:29:32.480 "data_offset": 2048, 00:29:32.481 "data_size": 63488 00:29:32.481 }, 00:29:32.481 { 00:29:32.481 "name": "BaseBdev3", 00:29:32.481 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:32.481 "is_configured": true, 00:29:32.481 "data_offset": 2048, 00:29:32.481 "data_size": 63488 00:29:32.481 }, 00:29:32.481 { 00:29:32.481 "name": "BaseBdev4", 00:29:32.481 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:32.481 "is_configured": true, 00:29:32.481 "data_offset": 2048, 00:29:32.481 "data_size": 63488 00:29:32.481 } 00:29:32.481 ] 00:29:32.481 }' 00:29:32.481 00:43:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:32.481 00:43:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:32.481 00:43:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:32.481 00:43:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:32.481 00:43:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:33.451 00:43:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:33.451 00:43:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:33.451 00:43:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:33.451 00:43:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:33.451 00:43:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:33.451 00:43:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:33.451 00:43:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.451 00:43:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.019 00:43:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:34.019 "name": "raid_bdev1", 00:29:34.019 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:34.019 "strip_size_kb": 64, 00:29:34.019 "state": "online", 00:29:34.019 "raid_level": "raid5f", 00:29:34.019 "superblock": true, 00:29:34.019 "num_base_bdevs": 4, 00:29:34.019 "num_base_bdevs_discovered": 4, 00:29:34.019 "num_base_bdevs_operational": 4, 00:29:34.019 "process": { 00:29:34.019 "type": "rebuild", 00:29:34.019 "target": "spare", 00:29:34.019 "progress": { 00:29:34.019 "blocks": 188160, 00:29:34.019 "percent": 98 00:29:34.019 } 00:29:34.019 }, 00:29:34.019 "base_bdevs_list": [ 
00:29:34.019 { 00:29:34.019 "name": "spare", 00:29:34.019 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:34.019 "is_configured": true, 00:29:34.019 "data_offset": 2048, 00:29:34.019 "data_size": 63488 00:29:34.019 }, 00:29:34.019 { 00:29:34.019 "name": "BaseBdev2", 00:29:34.019 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:34.019 "is_configured": true, 00:29:34.019 "data_offset": 2048, 00:29:34.019 "data_size": 63488 00:29:34.019 }, 00:29:34.019 { 00:29:34.019 "name": "BaseBdev3", 00:29:34.019 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:34.019 "is_configured": true, 00:29:34.019 "data_offset": 2048, 00:29:34.019 "data_size": 63488 00:29:34.019 }, 00:29:34.019 { 00:29:34.019 "name": "BaseBdev4", 00:29:34.019 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:34.019 "is_configured": true, 00:29:34.019 "data_offset": 2048, 00:29:34.019 "data_size": 63488 00:29:34.019 } 00:29:34.019 ] 00:29:34.019 }' 00:29:34.019 00:43:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:34.019 00:43:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:34.019 00:43:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:34.019 00:43:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:29:34.019 00:43:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:34.019 [2024-04-24 00:43:27.621110] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:34.019 [2024-04-24 00:43:27.621226] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:34.019 [2024-04-24 00:43:27.621433] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:34.954 00:43:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:29:34.954 00:43:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:34.954 00:43:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:34.954 00:43:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:29:34.954 00:43:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:29:34.954 00:43:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:34.954 00:43:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.954 00:43:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:35.257 "name": "raid_bdev1", 00:29:35.257 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:35.257 "strip_size_kb": 64, 00:29:35.257 "state": "online", 00:29:35.257 "raid_level": "raid5f", 00:29:35.257 "superblock": true, 00:29:35.257 "num_base_bdevs": 4, 00:29:35.257 "num_base_bdevs_discovered": 4, 00:29:35.257 "num_base_bdevs_operational": 4, 00:29:35.257 "base_bdevs_list": [ 00:29:35.257 { 00:29:35.257 "name": "spare", 00:29:35.257 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:35.257 "is_configured": true, 00:29:35.257 "data_offset": 2048, 00:29:35.257 "data_size": 63488 00:29:35.257 }, 00:29:35.257 { 00:29:35.257 "name": "BaseBdev2", 00:29:35.257 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:35.257 "is_configured": true, 00:29:35.257 "data_offset": 2048, 00:29:35.257 "data_size": 63488 00:29:35.257 }, 00:29:35.257 { 00:29:35.257 "name": "BaseBdev3", 00:29:35.257 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:35.257 "is_configured": true, 00:29:35.257 "data_offset": 2048, 00:29:35.257 
"data_size": 63488 00:29:35.257 }, 00:29:35.257 { 00:29:35.257 "name": "BaseBdev4", 00:29:35.257 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:35.257 "is_configured": true, 00:29:35.257 "data_offset": 2048, 00:29:35.257 "data_size": 63488 00:29:35.257 } 00:29:35.257 ] 00:29:35.257 }' 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@660 -- # break 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.257 00:43:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.516 00:43:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:35.516 "name": "raid_bdev1", 00:29:35.516 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:35.516 "strip_size_kb": 64, 00:29:35.516 "state": "online", 00:29:35.516 "raid_level": "raid5f", 00:29:35.516 "superblock": true, 00:29:35.516 "num_base_bdevs": 4, 00:29:35.516 "num_base_bdevs_discovered": 4, 00:29:35.516 "num_base_bdevs_operational": 4, 00:29:35.516 "base_bdevs_list": [ 00:29:35.516 { 00:29:35.516 "name": "spare", 00:29:35.516 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:35.516 "is_configured": true, 00:29:35.516 "data_offset": 2048, 00:29:35.516 "data_size": 63488 00:29:35.516 }, 00:29:35.516 { 00:29:35.516 "name": "BaseBdev2", 00:29:35.516 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:35.516 "is_configured": true, 00:29:35.516 "data_offset": 2048, 00:29:35.516 "data_size": 63488 00:29:35.516 }, 00:29:35.516 { 00:29:35.516 "name": "BaseBdev3", 00:29:35.516 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:35.516 "is_configured": true, 00:29:35.516 "data_offset": 2048, 00:29:35.516 "data_size": 63488 00:29:35.516 }, 00:29:35.516 { 00:29:35.516 "name": "BaseBdev4", 00:29:35.516 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:35.516 "is_configured": true, 00:29:35.516 "data_offset": 2048, 00:29:35.516 "data_size": 63488 00:29:35.516 } 00:29:35.516 ] 00:29:35.516 }' 00:29:35.516 00:43:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:35.516 00:43:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:35.516 00:43:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 
00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:35.775 "name": "raid_bdev1", 00:29:35.775 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:35.775 "strip_size_kb": 64, 00:29:35.775 "state": "online", 00:29:35.775 "raid_level": "raid5f", 00:29:35.775 "superblock": true, 00:29:35.775 "num_base_bdevs": 4, 00:29:35.775 "num_base_bdevs_discovered": 4, 00:29:35.775 "num_base_bdevs_operational": 4, 00:29:35.775 "base_bdevs_list": [ 00:29:35.775 { 00:29:35.775 "name": "spare", 00:29:35.775 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:35.775 "is_configured": true, 00:29:35.775 "data_offset": 2048, 00:29:35.775 "data_size": 63488 00:29:35.775 }, 00:29:35.775 { 00:29:35.775 "name": "BaseBdev2", 00:29:35.775 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:35.775 "is_configured": true, 00:29:35.775 "data_offset": 2048, 00:29:35.775 "data_size": 63488 00:29:35.775 }, 00:29:35.775 { 00:29:35.775 "name": "BaseBdev3", 00:29:35.775 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:35.775 "is_configured": true, 00:29:35.775 "data_offset": 2048, 00:29:35.775 "data_size": 63488 00:29:35.775 }, 00:29:35.775 { 00:29:35.775 "name": "BaseBdev4", 00:29:35.775 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:35.775 "is_configured": true, 00:29:35.775 "data_offset": 2048, 00:29:35.775 "data_size": 63488 00:29:35.775 } 00:29:35.775 ] 00:29:35.775 }' 00:29:35.775 00:43:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:35.775 00:43:29 -- common/autotest_common.sh@10 -- # set +x 00:29:36.341 00:43:30 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:36.599 [2024-04-24 00:43:30.384040] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:36.599 [2024-04-24 00:43:30.384098] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:36.599 [2024-04-24 00:43:30.384209] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:36.599 [2024-04-24 00:43:30.384323] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:36.599 [2024-04-24 00:43:30.384336] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:29:36.860 00:43:30 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.860 00:43:30 -- bdev/bdev_raid.sh@671 -- # jq length 00:29:36.860 00:43:30 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:29:36.860 00:43:30 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:29:36.860 00:43:30 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:36.860 00:43:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:36.860 00:43:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 
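With the raid bdev deleted and re-examined, the trace below exposes BaseBdev1 and the rebuilt spare as NBD block devices and compares their contents, presumably verifying that the data rebuilt onto the spare matches the original base bdev. In sketch form, using only commands visible in the trace (the rpc_py wrapper is the same assumption as above, and the readiness loop is a simplified stand-in for the waitfornbd/waitfornbd_exit helpers):

rpc_py nbd_start_disk BaseBdev1 /dev/nbd0
rpc_py nbd_start_disk spare /dev/nbd1

for nbd in nbd0 nbd1; do
    # wait until the kernel has registered the NBD device
    until grep -q -w "$nbd" /proc/partitions; do
        sleep 0.1
    done
done

# skip the first 1 MiB (2048 blocks x 512 B) of both devices, i.e. the
# data_offset region holding the raid superblock, and require the rest
# to be byte-identical
cmp -i 1048576 /dev/nbd0 /dev/nbd1

rpc_py nbd_stop_disk /dev/nbd0
rpc_py nbd_stop_disk /dev/nbd1

cmp exits non-zero on the first mismatch, which would fail the test at this point.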
00:29:36.860 00:43:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:36.860 00:43:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:36.860 00:43:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:36.860 00:43:30 -- bdev/nbd_common.sh@12 -- # local i 00:29:36.860 00:43:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:36.860 00:43:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:36.860 00:43:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:37.425 /dev/nbd0 00:29:37.425 00:43:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:37.425 00:43:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:37.425 00:43:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:29:37.425 00:43:30 -- common/autotest_common.sh@855 -- # local i 00:29:37.425 00:43:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:37.425 00:43:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:37.425 00:43:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:29:37.425 00:43:30 -- common/autotest_common.sh@859 -- # break 00:29:37.425 00:43:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:37.425 00:43:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:37.425 00:43:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:37.425 1+0 records in 00:29:37.425 1+0 records out 00:29:37.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710272 s, 5.8 MB/s 00:29:37.425 00:43:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:37.425 00:43:30 -- common/autotest_common.sh@872 -- # size=4096 00:29:37.425 00:43:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:37.425 00:43:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:37.425 00:43:30 -- common/autotest_common.sh@875 -- # return 0 00:29:37.425 00:43:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:37.425 00:43:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:37.425 00:43:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:37.425 /dev/nbd1 00:29:37.682 00:43:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:37.682 00:43:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:37.682 00:43:31 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:29:37.682 00:43:31 -- common/autotest_common.sh@855 -- # local i 00:29:37.682 00:43:31 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:37.682 00:43:31 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:37.682 00:43:31 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:29:37.682 00:43:31 -- common/autotest_common.sh@859 -- # break 00:29:37.682 00:43:31 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:37.682 00:43:31 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:37.682 00:43:31 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:37.682 1+0 records in 00:29:37.682 1+0 records out 00:29:37.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413596 s, 9.9 MB/s 00:29:37.682 00:43:31 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:37.682 00:43:31 -- 
common/autotest_common.sh@872 -- # size=4096 00:29:37.683 00:43:31 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:37.683 00:43:31 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:37.683 00:43:31 -- common/autotest_common.sh@875 -- # return 0 00:29:37.683 00:43:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:37.683 00:43:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:37.683 00:43:31 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:37.940 00:43:31 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:37.940 00:43:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:37.940 00:43:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:37.940 00:43:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:37.940 00:43:31 -- bdev/nbd_common.sh@51 -- # local i 00:29:37.940 00:43:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:37.940 00:43:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:38.198 00:43:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:38.198 00:43:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:38.198 00:43:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:38.198 00:43:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:38.198 00:43:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:38.198 00:43:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:38.198 00:43:31 -- bdev/nbd_common.sh@41 -- # break 00:29:38.198 00:43:31 -- bdev/nbd_common.sh@45 -- # return 0 00:29:38.198 00:43:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:38.198 00:43:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:38.456 00:43:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:38.456 00:43:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:38.456 00:43:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:38.456 00:43:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:38.456 00:43:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:38.456 00:43:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:38.457 00:43:31 -- bdev/nbd_common.sh@41 -- # break 00:29:38.457 00:43:32 -- bdev/nbd_common.sh@45 -- # return 0 00:29:38.457 00:43:32 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:29:38.457 00:43:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:29:38.457 00:43:32 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:29:38.457 00:43:32 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:38.714 00:43:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:38.714 [2024-04-24 00:43:32.466478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:38.714 [2024-04-24 00:43:32.466607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:38.714 [2024-04-24 00:43:32.466657] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:38.714 [2024-04-24 00:43:32.466684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:38.714 
[2024-04-24 00:43:32.469674] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:38.714 [2024-04-24 00:43:32.469758] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:38.714 [2024-04-24 00:43:32.469920] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:38.714 [2024-04-24 00:43:32.469973] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:38.714 BaseBdev1 00:29:38.714 00:43:32 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:29:38.714 00:43:32 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:29:38.714 00:43:32 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:29:39.278 00:43:32 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:39.278 [2024-04-24 00:43:33.006651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:39.278 [2024-04-24 00:43:33.006763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.278 [2024-04-24 00:43:33.006819] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:29:39.278 [2024-04-24 00:43:33.006847] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.278 [2024-04-24 00:43:33.007503] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.278 [2024-04-24 00:43:33.007584] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:39.278 [2024-04-24 00:43:33.007724] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:29:39.278 [2024-04-24 00:43:33.007738] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:29:39.278 [2024-04-24 00:43:33.007748] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:39.278 [2024-04-24 00:43:33.007797] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:29:39.278 [2024-04-24 00:43:33.007932] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:39.278 BaseBdev2 00:29:39.278 00:43:33 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:29:39.278 00:43:33 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:29:39.278 00:43:33 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:29:39.597 00:43:33 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:39.856 [2024-04-24 00:43:33.498751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:39.856 [2024-04-24 00:43:33.498888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.856 [2024-04-24 00:43:33.498948] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:29:39.856 [2024-04-24 00:43:33.498982] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.856 [2024-04-24 00:43:33.499577] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.856 [2024-04-24 00:43:33.499652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:39.856 [2024-04-24 00:43:33.499793] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:29:39.856 [2024-04-24 00:43:33.499818] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:39.856 BaseBdev3 00:29:39.856 00:43:33 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:29:39.856 00:43:33 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:29:39.856 00:43:33 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:29:40.116 00:43:33 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:40.395 [2024-04-24 00:43:33.974874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:40.395 [2024-04-24 00:43:33.975004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.395 [2024-04-24 00:43:33.975050] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:29:40.395 [2024-04-24 00:43:33.975084] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.395 [2024-04-24 00:43:33.975631] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.395 [2024-04-24 00:43:33.975706] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:40.395 [2024-04-24 00:43:33.975852] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:29:40.395 [2024-04-24 00:43:33.975887] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:40.395 BaseBdev4 00:29:40.395 00:43:33 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:40.655 00:43:34 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:40.913 [2024-04-24 00:43:34.483025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:40.913 [2024-04-24 00:43:34.483152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.913 [2024-04-24 00:43:34.483205] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:29:40.913 [2024-04-24 00:43:34.483250] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.913 [2024-04-24 00:43:34.483867] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.914 [2024-04-24 00:43:34.483937] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:40.914 [2024-04-24 00:43:34.484086] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:29:40.914 [2024-04-24 00:43:34.484114] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:40.914 spare 00:29:40.914 00:43:34 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:29:40.914 00:43:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:29:40.914 00:43:34 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:29:40.914 00:43:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:29:40.914 00:43:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:29:40.914 00:43:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:29:40.914 00:43:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:29:40.914 00:43:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:29:40.914 00:43:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:29:40.914 00:43:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:29:40.914 00:43:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.914 00:43:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.914 [2024-04-24 00:43:34.584249] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:29:40.914 [2024-04-24 00:43:34.584286] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:29:40.914 [2024-04-24 00:43:34.584479] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049510 00:29:40.914 [2024-04-24 00:43:34.593026] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:29:40.914 [2024-04-24 00:43:34.593061] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:29:40.914 [2024-04-24 00:43:34.593262] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:41.174 00:43:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:29:41.174 "name": "raid_bdev1", 00:29:41.174 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:41.174 "strip_size_kb": 64, 00:29:41.174 "state": "online", 00:29:41.174 "raid_level": "raid5f", 00:29:41.174 "superblock": true, 00:29:41.174 "num_base_bdevs": 4, 00:29:41.174 "num_base_bdevs_discovered": 4, 00:29:41.174 "num_base_bdevs_operational": 4, 00:29:41.174 "base_bdevs_list": [ 00:29:41.174 { 00:29:41.174 "name": "spare", 00:29:41.174 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:41.174 "is_configured": true, 00:29:41.174 "data_offset": 2048, 00:29:41.174 "data_size": 63488 00:29:41.174 }, 00:29:41.174 { 00:29:41.174 "name": "BaseBdev2", 00:29:41.174 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:41.174 "is_configured": true, 00:29:41.174 "data_offset": 2048, 00:29:41.174 "data_size": 63488 00:29:41.174 }, 00:29:41.174 { 00:29:41.174 "name": "BaseBdev3", 00:29:41.174 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:41.174 "is_configured": true, 00:29:41.174 "data_offset": 2048, 00:29:41.174 "data_size": 63488 00:29:41.174 }, 00:29:41.174 { 00:29:41.174 "name": "BaseBdev4", 00:29:41.174 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:41.174 "is_configured": true, 00:29:41.174 "data_offset": 2048, 00:29:41.174 "data_size": 63488 00:29:41.174 } 00:29:41.174 ] 00:29:41.174 }' 00:29:41.174 00:43:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:29:41.174 00:43:34 -- common/autotest_common.sh@10 -- # set +x 00:29:41.739 00:43:35 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:41.739 00:43:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:29:41.739 00:43:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:29:41.739 00:43:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:29:41.739 00:43:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:29:41.739 00:43:35 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.739 00:43:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.997 00:43:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:29:41.997 "name": "raid_bdev1", 00:29:41.997 "uuid": "91acd245-ec4d-4cea-867d-78dec8b3e46b", 00:29:41.997 "strip_size_kb": 64, 00:29:41.997 "state": "online", 00:29:41.997 "raid_level": "raid5f", 00:29:41.997 "superblock": true, 00:29:41.997 "num_base_bdevs": 4, 00:29:41.997 "num_base_bdevs_discovered": 4, 00:29:41.997 "num_base_bdevs_operational": 4, 00:29:41.997 "base_bdevs_list": [ 00:29:41.997 { 00:29:41.997 "name": "spare", 00:29:41.997 "uuid": "743f109f-931c-5923-827b-1f75c907deb9", 00:29:41.997 "is_configured": true, 00:29:41.997 "data_offset": 2048, 00:29:41.997 "data_size": 63488 00:29:41.997 }, 00:29:41.997 { 00:29:41.997 "name": "BaseBdev2", 00:29:41.997 "uuid": "c2bb94c2-9d80-597a-b506-26a5b3843c09", 00:29:41.997 "is_configured": true, 00:29:41.997 "data_offset": 2048, 00:29:41.997 "data_size": 63488 00:29:41.997 }, 00:29:41.997 { 00:29:41.997 "name": "BaseBdev3", 00:29:41.997 "uuid": "2ca4c473-96f0-5c86-bb61-3b1ab89acbf0", 00:29:41.997 "is_configured": true, 00:29:41.997 "data_offset": 2048, 00:29:41.997 "data_size": 63488 00:29:41.997 }, 00:29:41.997 { 00:29:41.997 "name": "BaseBdev4", 00:29:41.997 "uuid": "9faf5241-3589-55b0-ab3f-325ebf3e734b", 00:29:41.997 "is_configured": true, 00:29:41.997 "data_offset": 2048, 00:29:41.997 "data_size": 63488 00:29:41.997 } 00:29:41.997 ] 00:29:41.997 }' 00:29:41.997 00:43:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:29:41.997 00:43:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:41.997 00:43:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:29:41.997 00:43:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:29:41.997 00:43:35 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.997 00:43:35 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:42.256 00:43:35 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:29:42.256 00:43:35 -- bdev/bdev_raid.sh@709 -- # killprocess 140944 00:29:42.256 00:43:35 -- common/autotest_common.sh@936 -- # '[' -z 140944 ']' 00:29:42.256 00:43:35 -- common/autotest_common.sh@940 -- # kill -0 140944 00:29:42.256 00:43:35 -- common/autotest_common.sh@941 -- # uname 00:29:42.256 00:43:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:42.256 00:43:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140944 00:29:42.256 00:43:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:42.256 00:43:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:42.256 00:43:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140944' 00:29:42.256 killing process with pid 140944 00:29:42.256 00:43:35 -- common/autotest_common.sh@955 -- # kill 140944 00:29:42.256 Received shutdown signal, test time was about 60.000000 seconds 00:29:42.256 00:29:42.256 Latency(us) 00:29:42.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.256 =================================================================================================================== 00:29:42.256 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:42.256 [2024-04-24 00:43:35.936162] bdev_raid.c:1364:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:29:42.256 [2024-04-24 00:43:35.936245] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:42.256 [2024-04-24 00:43:35.936344] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:42.256 [2024-04-24 00:43:35.936361] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:29:42.256 00:43:35 -- common/autotest_common.sh@960 -- # wait 140944 00:29:42.823 [2024-04-24 00:43:36.490040] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:44.198 ************************************ 00:29:44.198 END TEST raid5f_rebuild_test_sb 00:29:44.198 ************************************ 00:29:44.198 00:43:37 -- bdev/bdev_raid.sh@711 -- # return 0 00:29:44.198 00:29:44.198 real 0m32.858s 00:29:44.198 user 0m49.388s 00:29:44.198 sys 0m4.519s 00:29:44.198 00:43:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:44.198 00:43:37 -- common/autotest_common.sh@10 -- # set +x 00:29:44.457 00:43:37 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:29:44.457 00:29:44.457 real 13m30.910s 00:29:44.457 user 21m47.575s 00:29:44.457 sys 1m59.638s 00:29:44.457 00:43:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:44.457 00:43:37 -- common/autotest_common.sh@10 -- # set +x 00:29:44.457 ************************************ 00:29:44.457 END TEST bdev_raid 00:29:44.457 ************************************ 00:29:44.457 00:43:38 -- spdk/autotest.sh@187 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:29:44.457 00:43:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:44.457 00:43:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:44.457 00:43:38 -- common/autotest_common.sh@10 -- # set +x 00:29:44.457 ************************************ 00:29:44.457 START TEST bdevperf_config 00:29:44.457 ************************************ 00:29:44.458 00:43:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:29:44.458 * Looking for test storage... 
00:29:44.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:29:44.458 00:43:38 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:29:44.458 00:43:38 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:29:44.458 00:43:38 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:29:44.458 00:43:38 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:44.458 00:43:38 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:44.458 00:43:38 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:29:44.458 00:43:38 -- bdevperf/common.sh@8 -- # local job_section=global 00:29:44.458 00:43:38 -- bdevperf/common.sh@9 -- # local rw=read 00:29:44.458 00:43:38 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:44.458 00:43:38 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:29:44.458 00:43:38 -- bdevperf/common.sh@13 -- # cat 00:29:44.458 00:43:38 -- bdevperf/common.sh@18 -- # job='[global]' 00:29:44.458 00:29:44.458 00:43:38 -- bdevperf/common.sh@19 -- # echo 00:29:44.458 00:43:38 -- bdevperf/common.sh@20 -- # cat 00:29:44.458 00:43:38 -- bdevperf/test_config.sh@18 -- # create_job job0 00:29:44.458 00:43:38 -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:44.458 00:43:38 -- bdevperf/common.sh@9 -- # local rw= 00:29:44.458 00:43:38 -- bdevperf/common.sh@10 -- # local filename= 00:29:44.458 00:43:38 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:44.458 00:43:38 -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:44.458 00:29:44.458 00:43:38 -- bdevperf/common.sh@19 -- # echo 00:29:44.458 00:43:38 -- bdevperf/common.sh@20 -- # cat 00:29:44.458 00:43:38 -- bdevperf/test_config.sh@19 -- # create_job job1 00:29:44.458 00:43:38 -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:44.458 00:43:38 -- bdevperf/common.sh@9 -- # local rw= 00:29:44.458 00:43:38 -- bdevperf/common.sh@10 -- # local filename= 00:29:44.458 00:43:38 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:44.458 00:43:38 -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:44.458 00:29:44.458 00:43:38 -- bdevperf/common.sh@19 -- # echo 00:29:44.458 00:43:38 -- bdevperf/common.sh@20 -- # cat 00:29:44.458 00:43:38 -- bdevperf/test_config.sh@20 -- # create_job job2 00:29:44.458 00:43:38 -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:44.458 00:43:38 -- bdevperf/common.sh@9 -- # local rw= 00:29:44.458 00:43:38 -- bdevperf/common.sh@10 -- # local filename= 00:29:44.458 00:43:38 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:44.458 00:43:38 -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:44.458 00:43:38 -- bdevperf/common.sh@19 -- # echo 00:29:44.458 00:29:44.458 00:43:38 -- bdevperf/common.sh@20 -- # cat 00:29:44.458 00:43:38 -- bdevperf/test_config.sh@21 -- # create_job job3 00:29:44.458 00:43:38 -- bdevperf/common.sh@8 -- # local job_section=job3 00:29:44.458 00:43:38 -- bdevperf/common.sh@9 -- # local rw= 00:29:44.458 00:43:38 -- bdevperf/common.sh@10 -- # local filename= 00:29:44.458 00:43:38 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:29:44.458 00:43:38 -- bdevperf/common.sh@18 -- # job='[job3]' 00:29:44.458 00:29:44.458 00:43:38 -- bdevperf/common.sh@19 -- # echo 00:29:44.458 00:43:38 -- bdevperf/common.sh@20 -- # cat 00:29:44.458 00:43:38 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:49.728 00:43:43 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-04-24 00:43:38.308293] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:29:49.728 [2024-04-24 00:43:38.308560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141749 ] 00:29:49.728 Using job config with 4 jobs 00:29:49.728 [2024-04-24 00:43:38.491904] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.728 [2024-04-24 00:43:38.818665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.728 cpumask for '\''job0'\'' is too big 00:29:49.728 cpumask for '\''job1'\'' is too big 00:29:49.728 cpumask for '\''job2'\'' is too big 00:29:49.728 cpumask for '\''job3'\'' is too big 00:29:49.728 Running I/O for 2 seconds... 00:29:49.728 00:29:49.728 Latency(us) 00:29:49.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.728 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:49.728 Malloc0 : 2.01 28714.58 28.04 0.00 0.00 8906.77 1622.80 13856.18 00:29:49.728 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:49.728 Malloc0 : 2.02 28693.46 28.02 0.00 0.00 8895.97 1614.99 12233.39 00:29:49.728 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:49.728 Malloc0 : 2.02 28671.52 28.00 0.00 0.00 8885.22 1591.59 10735.42 00:29:49.728 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:49.728 Malloc0 : 2.02 28649.85 27.98 0.00 0.00 8875.72 1583.79 10048.85 00:29:49.728 =================================================================================================================== 00:29:49.728 Total : 114729.41 112.04 0.00 0.00 8890.92 1583.79 13856.18' 00:29:49.728 00:43:43 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-04-24 00:43:38.308293] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:29:49.728 [2024-04-24 00:43:38.308560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141749 ] 00:29:49.728 Using job config with 4 jobs 00:29:49.728 [2024-04-24 00:43:38.491904] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.728 [2024-04-24 00:43:38.818665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.728 cpumask for '\''job0'\'' is too big 00:29:49.728 cpumask for '\''job1'\'' is too big 00:29:49.728 cpumask for '\''job2'\'' is too big 00:29:49.728 cpumask for '\''job3'\'' is too big 00:29:49.728 Running I/O for 2 seconds... 
00:29:49.728 00:29:49.728 Latency(us) 00:29:49.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.728 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:49.728 Malloc0 : 2.01 28714.58 28.04 0.00 0.00 8906.77 1622.80 13856.18 00:29:49.728 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:49.728 Malloc0 : 2.02 28693.46 28.02 0.00 0.00 8895.97 1614.99 12233.39 00:29:49.728 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:49.728 Malloc0 : 2.02 28671.52 28.00 0.00 0.00 8885.22 1591.59 10735.42 00:29:49.728 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:49.728 Malloc0 : 2.02 28649.85 27.98 0.00 0.00 8875.72 1583.79 10048.85 00:29:49.728 =================================================================================================================== 00:29:49.728 Total : 114729.41 112.04 0.00 0.00 8890.92 1583.79 13856.18' 00:29:49.728 00:43:43 -- bdevperf/common.sh@32 -- # echo '[2024-04-24 00:43:38.308293] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:29:49.728 [2024-04-24 00:43:38.308560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141749 ] 00:29:49.728 Using job config with 4 jobs 00:29:49.728 [2024-04-24 00:43:38.491904] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.728 [2024-04-24 00:43:38.818665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.728 cpumask for '\''job0'\'' is too big 00:29:49.728 cpumask for '\''job1'\'' is too big 00:29:49.728 cpumask for '\''job2'\'' is too big 00:29:49.728 cpumask for '\''job3'\'' is too big 00:29:49.728 Running I/O for 2 seconds... 00:29:49.728 00:29:49.728 Latency(us) 00:29:49.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.729 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:49.729 Malloc0 : 2.01 28714.58 28.04 0.00 0.00 8906.77 1622.80 13856.18 00:29:49.729 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:49.729 Malloc0 : 2.02 28693.46 28.02 0.00 0.00 8895.97 1614.99 12233.39 00:29:49.729 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:49.729 Malloc0 : 2.02 28671.52 28.00 0.00 0.00 8885.22 1591.59 10735.42 00:29:49.729 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:49.729 Malloc0 : 2.02 28649.85 27.98 0.00 0.00 8875.72 1583.79 10048.85 00:29:49.729 =================================================================================================================== 00:29:49.729 Total : 114729.41 112.04 0.00 0.00 8890.92 1583.79 13856.18' 00:29:49.729 00:43:43 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:49.729 00:43:43 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:49.729 00:43:43 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:29:49.729 00:43:43 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:49.729 [2024-04-24 00:43:43.273154] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
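Right after each run, the trace shows how the script validates the output: the captured bdevperf_output is echoed and grepped for the "Using job config with N jobs" notice, and the extracted count is compared against the expected value. A minimal sketch of that step, assuming the helper in bdevperf/common.sh does little more than what the xtrace shows:

    get_num_jobs() {
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
    }
    # a mismatch fails the test; the trap installed at test_config.sh@15 then runs cleanup
    [[ $(get_num_jobs "$bdevperf_output") == "4" ]]
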
00:29:49.729 [2024-04-24 00:43:43.273769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141810 ] 00:29:49.729 [2024-04-24 00:43:43.438137] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.987 [2024-04-24 00:43:43.712429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.554 cpumask for 'job0' is too big 00:29:50.554 cpumask for 'job1' is too big 00:29:50.554 cpumask for 'job2' is too big 00:29:50.554 cpumask for 'job3' is too big 00:29:54.737 00:43:47 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:29:54.737 Running I/O for 2 seconds... 00:29:54.737 00:29:54.737 Latency(us) 00:29:54.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:54.737 Malloc0 : 2.01 29918.43 29.22 0.00 0.00 8550.16 1591.59 13544.11 00:29:54.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:54.737 Malloc0 : 2.02 29920.42 29.22 0.00 0.00 8532.25 1552.58 11921.31 00:29:54.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:54.737 Malloc0 : 2.02 29896.85 29.20 0.00 0.00 8523.80 1568.18 10485.76 00:29:54.737 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:29:54.737 Malloc0 : 2.02 29875.49 29.18 0.00 0.00 8514.46 1614.99 9424.70 00:29:54.737 =================================================================================================================== 00:29:54.737 Total : 119611.18 116.81 0.00 0.00 8530.14 1552.58 13544.11' 00:29:54.737 00:43:47 -- bdevperf/test_config.sh@27 -- # cleanup 00:29:54.737 00:43:47 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:54.737 00:29:54.737 00:43:47 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:29:54.737 00:43:47 -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:54.737 00:43:47 -- bdevperf/common.sh@9 -- # local rw=write 00:29:54.737 00:43:47 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:54.737 00:43:47 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:54.737 00:43:47 -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:54.737 00:43:47 -- bdevperf/common.sh@19 -- # echo 00:29:54.737 00:43:47 -- bdevperf/common.sh@20 -- # cat 00:29:54.737 00:43:47 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:29:54.737 00:43:47 -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:54.737 00:43:47 -- bdevperf/common.sh@9 -- # local rw=write 00:29:54.737 00:43:47 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:54.737 00:43:47 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:54.737 00:43:47 -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:54.737 00:29:54.737 00:43:47 -- bdevperf/common.sh@19 -- # echo 00:29:54.737 00:43:47 -- bdevperf/common.sh@20 -- # cat 00:29:54.737 00:43:47 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:29:54.737 00:43:47 -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:54.737 00:43:47 -- bdevperf/common.sh@9 -- # local rw=write 00:29:54.737 00:43:47 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:29:54.737 00:43:47 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:54.737 00:43:47 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:29:54.737 00:29:54.737 00:43:47 -- bdevperf/common.sh@19 -- # echo 00:29:54.737 00:43:47 -- bdevperf/common.sh@20 -- # cat 00:29:54.737 00:43:47 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:58.929 00:43:52 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-04-24 00:43:48.054633] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:29:58.929 [2024-04-24 00:43:48.054823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141868 ] 00:29:58.929 Using job config with 3 jobs 00:29:58.929 [2024-04-24 00:43:48.238762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.929 [2024-04-24 00:43:48.477926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.929 cpumask for '\''job0'\'' is too big 00:29:58.929 cpumask for '\''job1'\'' is too big 00:29:58.929 cpumask for '\''job2'\'' is too big 00:29:58.929 Running I/O for 2 seconds... 00:29:58.929 00:29:58.929 Latency(us) 00:29:58.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.929 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:58.929 Malloc0 : 2.01 40883.46 39.93 0.00 0.00 6255.21 1568.18 9299.87 00:29:58.929 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:58.929 Malloc0 : 2.01 40856.30 39.90 0.00 0.00 6248.66 1443.35 9175.04 00:29:58.929 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:58.929 Malloc0 : 2.01 40828.29 39.87 0.00 0.00 6242.29 1365.33 8862.96 00:29:58.929 =================================================================================================================== 00:29:58.929 Total : 122568.06 119.70 0.00 0.00 6248.72 1365.33 9299.87' 00:29:58.929 00:43:52 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-04-24 00:43:48.054633] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:29:58.929 [2024-04-24 00:43:48.054823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141868 ] 00:29:58.929 Using job config with 3 jobs 00:29:58.929 [2024-04-24 00:43:48.238762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.929 [2024-04-24 00:43:48.477926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.929 cpumask for '\''job0'\'' is too big 00:29:58.929 cpumask for '\''job1'\'' is too big 00:29:58.929 cpumask for '\''job2'\'' is too big 00:29:58.929 Running I/O for 2 seconds... 
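Each case in test_config.sh follows the same rhythm being replayed here for the write-only jobs: rebuild test.conf, run bdevperf for two seconds, check the reported job count, then clean up before the next case. Compressed into a sketch (names taken from the trace; the real script interleaves these steps with the xtrace output shown):

    create_job job0 write Malloc0
    create_job job1 write Malloc0
    create_job job2 write Malloc0
    bdevperf_output=$("$bdevperf" -t 2 --json "$jsonconf" -j "$testconf")
    [[ $(get_num_jobs "$bdevperf_output") == "3" ]]
    cleanup   # rm -f .../test.conf, as seen at common.sh@36
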
00:29:58.929 00:29:58.929 Latency(us) 00:29:58.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.929 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:58.929 Malloc0 : 2.01 40883.46 39.93 0.00 0.00 6255.21 1568.18 9299.87 00:29:58.929 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:58.929 Malloc0 : 2.01 40856.30 39.90 0.00 0.00 6248.66 1443.35 9175.04 00:29:58.929 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:58.929 Malloc0 : 2.01 40828.29 39.87 0.00 0.00 6242.29 1365.33 8862.96 00:29:58.929 =================================================================================================================== 00:29:58.929 Total : 122568.06 119.70 0.00 0.00 6248.72 1365.33 9299.87' 00:29:58.929 00:43:52 -- bdevperf/common.sh@32 -- # echo '[2024-04-24 00:43:48.054633] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:29:58.929 [2024-04-24 00:43:48.054823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141868 ] 00:29:58.929 Using job config with 3 jobs 00:29:58.929 [2024-04-24 00:43:48.238762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.929 [2024-04-24 00:43:48.477926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.929 cpumask for '\''job0'\'' is too big 00:29:58.929 cpumask for '\''job1'\'' is too big 00:29:58.929 cpumask for '\''job2'\'' is too big 00:29:58.929 Running I/O for 2 seconds... 00:29:58.929 00:29:58.929 Latency(us) 00:29:58.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.929 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:58.929 Malloc0 : 2.01 40883.46 39.93 0.00 0.00 6255.21 1568.18 9299.87 00:29:58.929 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:58.929 Malloc0 : 2.01 40856.30 39.90 0.00 0.00 6248.66 1443.35 9175.04 00:29:58.929 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:29:58.929 Malloc0 : 2.01 40828.29 39.87 0.00 0.00 6242.29 1365.33 8862.96 00:29:58.929 =================================================================================================================== 00:29:58.929 Total : 122568.06 119.70 0.00 0.00 6248.72 1365.33 9299.87' 00:29:58.929 00:43:52 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:29:58.929 00:43:52 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:29:58.929 00:43:52 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:29:58.929 00:43:52 -- bdevperf/test_config.sh@35 -- # cleanup 00:29:58.929 00:43:52 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:29:58.930 00:43:52 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:29:58.930 00:43:52 -- bdevperf/common.sh@8 -- # local job_section=global 00:29:58.930 00:43:52 -- bdevperf/common.sh@9 -- # local rw=rw 00:29:58.930 00:43:52 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:29:58.930 00:43:52 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:29:58.930 00:43:52 -- bdevperf/common.sh@13 -- # cat 00:29:58.930 00:43:52 -- bdevperf/common.sh@18 -- # job='[global]' 00:29:58.930 00:29:58.930 00:43:52 -- bdevperf/common.sh@19 -- # echo 00:29:58.930 
00:43:52 -- bdevperf/common.sh@20 -- # cat 00:29:58.930 00:43:52 -- bdevperf/test_config.sh@38 -- # create_job job0 00:29:58.930 00:43:52 -- bdevperf/common.sh@8 -- # local job_section=job0 00:29:58.930 00:43:52 -- bdevperf/common.sh@9 -- # local rw= 00:29:58.930 00:43:52 -- bdevperf/common.sh@10 -- # local filename= 00:29:58.930 00:43:52 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:29:58.930 00:43:52 -- bdevperf/common.sh@18 -- # job='[job0]' 00:29:58.930 00:43:52 -- bdevperf/common.sh@19 -- # echo 00:29:58.930 00:29:58.930 00:43:52 -- bdevperf/common.sh@20 -- # cat 00:29:58.930 00:43:52 -- bdevperf/test_config.sh@39 -- # create_job job1 00:29:58.930 00:43:52 -- bdevperf/common.sh@8 -- # local job_section=job1 00:29:58.930 00:43:52 -- bdevperf/common.sh@9 -- # local rw= 00:29:58.930 00:43:52 -- bdevperf/common.sh@10 -- # local filename= 00:29:58.930 00:43:52 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:29:58.930 00:29:58.930 00:43:52 -- bdevperf/common.sh@18 -- # job='[job1]' 00:29:58.930 00:43:52 -- bdevperf/common.sh@19 -- # echo 00:29:58.930 00:43:52 -- bdevperf/common.sh@20 -- # cat 00:29:58.930 00:43:52 -- bdevperf/test_config.sh@40 -- # create_job job2 00:29:58.930 00:43:52 -- bdevperf/common.sh@8 -- # local job_section=job2 00:29:58.930 00:43:52 -- bdevperf/common.sh@9 -- # local rw= 00:29:58.930 00:43:52 -- bdevperf/common.sh@10 -- # local filename= 00:29:58.930 00:43:52 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:29:58.930 00:43:52 -- bdevperf/common.sh@18 -- # job='[job2]' 00:29:58.930 00:29:58.930 00:43:52 -- bdevperf/common.sh@19 -- # echo 00:29:58.930 00:43:52 -- bdevperf/common.sh@20 -- # cat 00:29:58.930 00:43:52 -- bdevperf/test_config.sh@41 -- # create_job job3 00:29:58.930 00:43:52 -- bdevperf/common.sh@8 -- # local job_section=job3 00:29:58.930 00:43:52 -- bdevperf/common.sh@9 -- # local rw= 00:29:58.930 00:43:52 -- bdevperf/common.sh@10 -- # local filename= 00:29:58.930 00:43:52 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:29:58.930 00:43:52 -- bdevperf/common.sh@18 -- # job='[job3]' 00:29:58.930 00:43:52 -- bdevperf/common.sh@19 -- # echo 00:29:58.930 00:29:58.930 00:43:52 -- bdevperf/common.sh@20 -- # cat 00:29:58.930 00:43:52 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:30:04.197 00:43:57 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-04-24 00:43:52.764927] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:30:04.197 [2024-04-24 00:43:52.765111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141938 ] 00:30:04.197 Using job config with 4 jobs 00:30:04.197 [2024-04-24 00:43:52.947745] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.197 [2024-04-24 00:43:53.175873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.197 cpumask for '\''job0'\'' is too big 00:30:04.197 cpumask for '\''job1'\'' is too big 00:30:04.197 cpumask for '\''job2'\'' is too big 00:30:04.197 cpumask for '\''job3'\'' is too big 00:30:04.197 Running I/O for 2 seconds... 
00:30:04.197 00:30:04.197 Latency(us) 00:30:04.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.197 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc0 : 2.02 15180.96 14.83 0.00 0.00 16853.55 2949.12 25215.76 00:30:04.197 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc1 : 2.03 15168.86 14.81 0.00 0.00 16850.41 3557.67 25090.93 00:30:04.197 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc0 : 2.03 15157.59 14.80 0.00 0.00 16820.18 2949.12 22344.66 00:30:04.197 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc1 : 2.03 15145.73 14.79 0.00 0.00 16820.31 3479.65 22344.66 00:30:04.197 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc0 : 2.04 15194.25 14.84 0.00 0.00 16723.62 2949.12 19223.89 00:30:04.197 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc1 : 2.04 15182.76 14.83 0.00 0.00 16722.67 3526.46 19848.05 00:30:04.197 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc0 : 2.04 15171.75 14.82 0.00 0.00 16688.97 2902.31 19848.05 00:30:04.197 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc1 : 2.04 15160.67 14.81 0.00 0.00 16689.04 3526.46 19723.22 00:30:04.197 =================================================================================================================== 00:30:04.197 Total : 121362.57 118.52 0.00 0.00 16770.82 2902.31 25215.76' 00:30:04.197 00:43:57 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-04-24 00:43:52.764927] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:30:04.197 [2024-04-24 00:43:52.765111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141938 ] 00:30:04.197 Using job config with 4 jobs 00:30:04.197 [2024-04-24 00:43:52.947745] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.197 [2024-04-24 00:43:53.175873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.197 cpumask for '\''job0'\'' is too big 00:30:04.197 cpumask for '\''job1'\'' is too big 00:30:04.197 cpumask for '\''job2'\'' is too big 00:30:04.197 cpumask for '\''job3'\'' is too big 00:30:04.197 Running I/O for 2 seconds... 
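On these tables the IOPS and MiB/s figures in the Total row are just the per-job sums, and because every IO here is 1024 bytes the MiB/s column is IOPS/1024. For the mixed rw table above, for example:

    # Sum of the eight per-job IOPS values printed above:
    awk 'BEGIN { printf "%.2f\n", 15180.96+15168.86+15157.59+15145.73+15194.25+15182.76+15171.75+15160.67 }'
    # -> 121362.57, matching the Total row; 121362.57 / 1024 ≈ 118.52 MiB/s (IO size: 1024)
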
00:30:04.197 00:30:04.197 Latency(us) 00:30:04.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.197 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc0 : 2.02 15180.96 14.83 0.00 0.00 16853.55 2949.12 25215.76 00:30:04.197 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc1 : 2.03 15168.86 14.81 0.00 0.00 16850.41 3557.67 25090.93 00:30:04.197 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc0 : 2.03 15157.59 14.80 0.00 0.00 16820.18 2949.12 22344.66 00:30:04.197 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc1 : 2.03 15145.73 14.79 0.00 0.00 16820.31 3479.65 22344.66 00:30:04.197 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc0 : 2.04 15194.25 14.84 0.00 0.00 16723.62 2949.12 19223.89 00:30:04.197 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc1 : 2.04 15182.76 14.83 0.00 0.00 16722.67 3526.46 19848.05 00:30:04.197 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.197 Malloc0 : 2.04 15171.75 14.82 0.00 0.00 16688.97 2902.31 19848.05 00:30:04.197 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.198 Malloc1 : 2.04 15160.67 14.81 0.00 0.00 16689.04 3526.46 19723.22 00:30:04.198 =================================================================================================================== 00:30:04.198 Total : 121362.57 118.52 0.00 0.00 16770.82 2902.31 25215.76' 00:30:04.198 00:43:57 -- bdevperf/common.sh@32 -- # echo '[2024-04-24 00:43:52.764927] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:30:04.198 [2024-04-24 00:43:52.765111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141938 ] 00:30:04.198 Using job config with 4 jobs 00:30:04.198 [2024-04-24 00:43:52.947745] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.198 [2024-04-24 00:43:53.175873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.198 cpumask for '\''job0'\'' is too big 00:30:04.198 cpumask for '\''job1'\'' is too big 00:30:04.198 cpumask for '\''job2'\'' is too big 00:30:04.198 cpumask for '\''job3'\'' is too big 00:30:04.198 Running I/O for 2 seconds... 
00:30:04.198 00:30:04.198 Latency(us) 00:30:04.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.198 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.198 Malloc0 : 2.02 15180.96 14.83 0.00 0.00 16853.55 2949.12 25215.76 00:30:04.198 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.198 Malloc1 : 2.03 15168.86 14.81 0.00 0.00 16850.41 3557.67 25090.93 00:30:04.198 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.198 Malloc0 : 2.03 15157.59 14.80 0.00 0.00 16820.18 2949.12 22344.66 00:30:04.198 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.198 Malloc1 : 2.03 15145.73 14.79 0.00 0.00 16820.31 3479.65 22344.66 00:30:04.198 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.198 Malloc0 : 2.04 15194.25 14.84 0.00 0.00 16723.62 2949.12 19223.89 00:30:04.198 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.198 Malloc1 : 2.04 15182.76 14.83 0.00 0.00 16722.67 3526.46 19848.05 00:30:04.198 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.198 Malloc0 : 2.04 15171.75 14.82 0.00 0.00 16688.97 2902.31 19848.05 00:30:04.198 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:30:04.198 Malloc1 : 2.04 15160.67 14.81 0.00 0.00 16689.04 3526.46 19723.22 00:30:04.198 =================================================================================================================== 00:30:04.198 Total : 121362.57 118.52 0.00 0.00 16770.82 2902.31 25215.76' 00:30:04.198 00:43:57 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:30:04.198 00:43:57 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:30:04.198 00:43:57 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:30:04.198 00:43:57 -- bdevperf/test_config.sh@44 -- # cleanup 00:30:04.198 00:43:57 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:30:04.198 00:43:57 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:30:04.198 00:30:04.198 real 0m19.385s 00:30:04.198 user 0m17.554s 00:30:04.198 sys 0m1.238s 00:30:04.198 00:43:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:04.198 00:43:57 -- common/autotest_common.sh@10 -- # set +x 00:30:04.198 ************************************ 00:30:04.198 END TEST bdevperf_config 00:30:04.198 ************************************ 00:30:04.198 00:43:57 -- spdk/autotest.sh@188 -- # uname -s 00:30:04.198 00:43:57 -- spdk/autotest.sh@188 -- # [[ Linux == Linux ]] 00:30:04.198 00:43:57 -- spdk/autotest.sh@189 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:30:04.198 00:43:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:04.198 00:43:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:04.198 00:43:57 -- common/autotest_common.sh@10 -- # set +x 00:30:04.198 ************************************ 00:30:04.198 START TEST reactor_set_interrupt 00:30:04.198 ************************************ 00:30:04.198 00:43:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:30:04.198 * Looking for test storage... 
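The asterisk banners and the real/user/sys timing that close the bdevperf_config section come from the run_test wrapper in common/autotest_common.sh, which autotest.sh uses again here to launch reactor_set_interrupt.sh. A rough illustration of that wrapper, assuming it does roughly what the banners suggest (the real function also handles the argument check visible as '[' 2 -le 1 ']' and the xtrace_disable calls above):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
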
00:30:04.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:30:04.198 00:43:57 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:30:04.198 00:43:57 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:30:04.198 00:43:57 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:30:04.198 00:43:57 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:30:04.198 00:43:57 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:30:04.198 00:43:57 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:04.198 00:43:57 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:30:04.198 00:43:57 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:30:04.198 00:43:57 -- common/autotest_common.sh@34 -- # set -e 00:30:04.198 00:43:57 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:30:04.198 00:43:57 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:30:04.198 00:43:57 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:30:04.198 00:43:57 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:30:04.198 00:43:57 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:30:04.198 00:43:57 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:30:04.198 00:43:57 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:30:04.198 00:43:57 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:30:04.198 00:43:57 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:30:04.198 00:43:57 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:30:04.198 00:43:57 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:30:04.198 00:43:57 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:30:04.198 00:43:57 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:30:04.198 00:43:57 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:30:04.198 00:43:57 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:30:04.198 00:43:57 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:30:04.198 00:43:57 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:30:04.198 00:43:57 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:30:04.198 00:43:57 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:30:04.198 00:43:57 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:30:04.198 00:43:57 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:30:04.198 00:43:57 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:30:04.198 00:43:57 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:30:04.198 00:43:57 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:30:04.198 00:43:57 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:30:04.198 00:43:57 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:30:04.198 00:43:57 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:30:04.198 00:43:57 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:30:04.198 00:43:57 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:30:04.198 00:43:57 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:30:04.198 00:43:57 -- common/build_config.sh@26 -- 
# CONFIG_HAVE_ARC4RANDOM=n 00:30:04.198 00:43:57 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:30:04.198 00:43:57 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:30:04.198 00:43:57 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:30:04.198 00:43:57 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:30:04.198 00:43:57 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:30:04.198 00:43:57 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:30:04.198 00:43:57 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:30:04.198 00:43:57 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:30:04.198 00:43:57 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:30:04.198 00:43:57 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:30:04.198 00:43:57 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:30:04.198 00:43:57 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:30:04.198 00:43:57 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:30:04.198 00:43:57 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:30:04.198 00:43:57 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:30:04.198 00:43:57 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:30:04.198 00:43:57 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:30:04.198 00:43:57 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:30:04.198 00:43:57 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:30:04.198 00:43:57 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:30:04.199 00:43:57 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:30:04.199 00:43:57 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:30:04.199 00:43:57 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:30:04.199 00:43:57 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:30:04.199 00:43:57 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:30:04.199 00:43:57 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:30:04.199 00:43:57 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:30:04.199 00:43:57 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:30:04.199 00:43:57 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:30:04.199 00:43:57 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:30:04.199 00:43:57 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:30:04.199 00:43:57 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:30:04.199 00:43:57 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:30:04.199 00:43:57 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:30:04.199 00:43:57 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:30:04.199 00:43:57 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:30:04.199 00:43:57 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:30:04.199 00:43:57 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:30:04.199 00:43:57 -- common/build_config.sh@65 -- # CONFIG_SHARED=n 00:30:04.199 00:43:57 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=y 00:30:04.199 00:43:57 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:30:04.199 00:43:57 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:30:04.199 00:43:57 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:30:04.199 00:43:57 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:30:04.199 00:43:57 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:30:04.199 00:43:57 -- common/build_config.sh@72 -- # CONFIG_RAID5F=y 00:30:04.199 00:43:57 -- common/build_config.sh@73 -- # 
CONFIG_EXAMPLES=y 00:30:04.199 00:43:57 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:30:04.199 00:43:57 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:30:04.199 00:43:57 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:30:04.199 00:43:57 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:30:04.199 00:43:57 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:30:04.199 00:43:57 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:30:04.199 00:43:57 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:30:04.199 00:43:57 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:30:04.199 00:43:57 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:30:04.199 00:43:57 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:30:04.199 00:43:57 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:30:04.199 00:43:57 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:30:04.199 00:43:57 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:30:04.199 00:43:57 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:30:04.199 00:43:57 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:30:04.199 00:43:57 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:30:04.199 00:43:57 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:30:04.199 00:43:57 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:30:04.199 00:43:57 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:30:04.199 00:43:57 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:30:04.199 00:43:57 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:30:04.199 00:43:57 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:30:04.199 00:43:57 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:30:04.199 00:43:57 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:30:04.199 00:43:57 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:30:04.199 #define SPDK_CONFIG_H 00:30:04.199 #define SPDK_CONFIG_APPS 1 00:30:04.199 #define SPDK_CONFIG_ARCH native 00:30:04.199 #define SPDK_CONFIG_ASAN 1 00:30:04.199 #undef SPDK_CONFIG_AVAHI 00:30:04.199 #undef SPDK_CONFIG_CET 00:30:04.199 #define SPDK_CONFIG_COVERAGE 1 00:30:04.199 #define SPDK_CONFIG_CROSS_PREFIX 00:30:04.199 #undef SPDK_CONFIG_CRYPTO 00:30:04.199 #undef SPDK_CONFIG_CRYPTO_MLX5 00:30:04.199 #undef SPDK_CONFIG_CUSTOMOCF 00:30:04.199 #undef SPDK_CONFIG_DAOS 00:30:04.199 #define SPDK_CONFIG_DAOS_DIR 00:30:04.199 #define SPDK_CONFIG_DEBUG 1 00:30:04.199 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:30:04.199 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:30:04.199 #define SPDK_CONFIG_DPDK_INC_DIR 00:30:04.199 #define SPDK_CONFIG_DPDK_LIB_DIR 00:30:04.199 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:30:04.199 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:30:04.199 #define SPDK_CONFIG_EXAMPLES 1 00:30:04.199 #undef SPDK_CONFIG_FC 00:30:04.199 #define SPDK_CONFIG_FC_PATH 00:30:04.199 #define SPDK_CONFIG_FIO_PLUGIN 1 00:30:04.199 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:30:04.199 #undef SPDK_CONFIG_FUSE 00:30:04.199 #undef SPDK_CONFIG_FUZZER 00:30:04.199 #define 
SPDK_CONFIG_FUZZER_LIB 00:30:04.199 #undef SPDK_CONFIG_GOLANG 00:30:04.199 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:30:04.199 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:30:04.199 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:30:04.199 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:30:04.199 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:30:04.199 #undef SPDK_CONFIG_HAVE_LIBBSD 00:30:04.199 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:30:04.199 #define SPDK_CONFIG_IDXD 1 00:30:04.199 #undef SPDK_CONFIG_IDXD_KERNEL 00:30:04.199 #undef SPDK_CONFIG_IPSEC_MB 00:30:04.199 #define SPDK_CONFIG_IPSEC_MB_DIR 00:30:04.199 #define SPDK_CONFIG_ISAL 1 00:30:04.199 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:30:04.199 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:30:04.199 #define SPDK_CONFIG_LIBDIR 00:30:04.199 #undef SPDK_CONFIG_LTO 00:30:04.199 #define SPDK_CONFIG_MAX_LCORES 00:30:04.199 #define SPDK_CONFIG_NVME_CUSE 1 00:30:04.199 #undef SPDK_CONFIG_OCF 00:30:04.199 #define SPDK_CONFIG_OCF_PATH 00:30:04.199 #define SPDK_CONFIG_OPENSSL_PATH 00:30:04.199 #undef SPDK_CONFIG_PGO_CAPTURE 00:30:04.199 #define SPDK_CONFIG_PGO_DIR 00:30:04.199 #undef SPDK_CONFIG_PGO_USE 00:30:04.199 #define SPDK_CONFIG_PREFIX /usr/local 00:30:04.199 #define SPDK_CONFIG_RAID5F 1 00:30:04.199 #undef SPDK_CONFIG_RBD 00:30:04.199 #define SPDK_CONFIG_RDMA 1 00:30:04.199 #define SPDK_CONFIG_RDMA_PROV verbs 00:30:04.199 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:30:04.199 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:30:04.199 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:30:04.199 #undef SPDK_CONFIG_SHARED 00:30:04.199 #undef SPDK_CONFIG_SMA 00:30:04.199 #define SPDK_CONFIG_TESTS 1 00:30:04.199 #undef SPDK_CONFIG_TSAN 00:30:04.199 #undef SPDK_CONFIG_UBLK 00:30:04.199 #define SPDK_CONFIG_UBSAN 1 00:30:04.199 #define SPDK_CONFIG_UNIT_TESTS 1 00:30:04.199 #undef SPDK_CONFIG_URING 00:30:04.199 #define SPDK_CONFIG_URING_PATH 00:30:04.199 #undef SPDK_CONFIG_URING_ZNS 00:30:04.199 #undef SPDK_CONFIG_USDT 00:30:04.199 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:30:04.199 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:30:04.199 #undef SPDK_CONFIG_VFIO_USER 00:30:04.199 #define SPDK_CONFIG_VFIO_USER_DIR 00:30:04.199 #define SPDK_CONFIG_VHOST 1 00:30:04.199 #define SPDK_CONFIG_VIRTIO 1 00:30:04.199 #undef SPDK_CONFIG_VTUNE 00:30:04.199 #define SPDK_CONFIG_VTUNE_DIR 00:30:04.199 #define SPDK_CONFIG_WERROR 1 00:30:04.199 #define SPDK_CONFIG_WPDK_DIR 00:30:04.199 #undef SPDK_CONFIG_XNVME 00:30:04.199 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:30:04.199 00:43:57 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:30:04.199 00:43:57 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:04.199 00:43:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.199 00:43:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.199 00:43:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.199 00:43:57 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:04.199 00:43:57 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:04.199 00:43:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:04.199 00:43:57 -- paths/export.sh@5 -- # export PATH 00:30:04.199 00:43:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:04.199 00:43:57 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:30:04.199 00:43:57 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:30:04.199 00:43:57 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:30:04.199 00:43:57 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:30:04.199 00:43:57 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:30:04.199 00:43:57 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:30:04.199 00:43:57 -- pm/common@67 -- # TEST_TAG=N/A 00:30:04.199 00:43:57 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:30:04.199 00:43:57 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:30:04.199 00:43:57 -- pm/common@71 -- # uname -s 00:30:04.200 00:43:57 -- pm/common@71 -- # PM_OS=Linux 00:30:04.200 00:43:57 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:30:04.200 00:43:57 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:30:04.200 00:43:57 -- pm/common@76 -- # [[ Linux == Linux ]] 00:30:04.200 00:43:57 -- pm/common@76 -- # [[ QEMU != QEMU ]] 00:30:04.200 00:43:57 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:30:04.200 00:43:57 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:30:04.200 00:43:57 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:30:04.200 00:43:57 -- common/autotest_common.sh@57 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:30:04.200 00:43:57 -- common/autotest_common.sh@61 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:30:04.200 00:43:57 -- common/autotest_common.sh@63 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:30:04.200 00:43:57 -- common/autotest_common.sh@65 -- # : 1 00:30:04.200 00:43:57 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:30:04.200 00:43:57 -- common/autotest_common.sh@67 -- # : 1 00:30:04.200 00:43:57 -- common/autotest_common.sh@68 -- # export 
SPDK_TEST_UNITTEST 00:30:04.200 00:43:57 -- common/autotest_common.sh@69 -- # : 00:30:04.200 00:43:57 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:30:04.200 00:43:57 -- common/autotest_common.sh@71 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:30:04.200 00:43:57 -- common/autotest_common.sh@73 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:30:04.200 00:43:57 -- common/autotest_common.sh@75 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:30:04.200 00:43:57 -- common/autotest_common.sh@77 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:30:04.200 00:43:57 -- common/autotest_common.sh@79 -- # : 1 00:30:04.200 00:43:57 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:30:04.200 00:43:57 -- common/autotest_common.sh@81 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:30:04.200 00:43:57 -- common/autotest_common.sh@83 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:30:04.200 00:43:57 -- common/autotest_common.sh@85 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:30:04.200 00:43:57 -- common/autotest_common.sh@87 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:30:04.200 00:43:57 -- common/autotest_common.sh@89 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:30:04.200 00:43:57 -- common/autotest_common.sh@91 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:30:04.200 00:43:57 -- common/autotest_common.sh@93 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:30:04.200 00:43:57 -- common/autotest_common.sh@95 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:30:04.200 00:43:57 -- common/autotest_common.sh@97 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:30:04.200 00:43:57 -- common/autotest_common.sh@99 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:30:04.200 00:43:57 -- common/autotest_common.sh@101 -- # : rdma 00:30:04.200 00:43:57 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:30:04.200 00:43:57 -- common/autotest_common.sh@103 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:30:04.200 00:43:57 -- common/autotest_common.sh@105 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:30:04.200 00:43:57 -- common/autotest_common.sh@107 -- # : 1 00:30:04.200 00:43:57 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:30:04.200 00:43:57 -- common/autotest_common.sh@109 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:30:04.200 00:43:57 -- common/autotest_common.sh@111 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:30:04.200 00:43:57 -- common/autotest_common.sh@113 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:30:04.200 00:43:57 -- common/autotest_common.sh@115 -- # : 0 00:30:04.200 00:43:57 -- 
common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:30:04.200 00:43:57 -- common/autotest_common.sh@117 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:30:04.200 00:43:57 -- common/autotest_common.sh@119 -- # : 1 00:30:04.200 00:43:57 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:30:04.200 00:43:57 -- common/autotest_common.sh@121 -- # : 1 00:30:04.200 00:43:57 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:30:04.200 00:43:57 -- common/autotest_common.sh@123 -- # : 00:30:04.200 00:43:57 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:30:04.200 00:43:57 -- common/autotest_common.sh@125 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:30:04.200 00:43:57 -- common/autotest_common.sh@127 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:30:04.200 00:43:57 -- common/autotest_common.sh@129 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:30:04.200 00:43:57 -- common/autotest_common.sh@131 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:30:04.200 00:43:57 -- common/autotest_common.sh@133 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:30:04.200 00:43:57 -- common/autotest_common.sh@135 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:30:04.200 00:43:57 -- common/autotest_common.sh@137 -- # : 00:30:04.200 00:43:57 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:30:04.200 00:43:57 -- common/autotest_common.sh@139 -- # : true 00:30:04.200 00:43:57 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:30:04.200 00:43:57 -- common/autotest_common.sh@141 -- # : 1 00:30:04.200 00:43:57 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:30:04.200 00:43:57 -- common/autotest_common.sh@143 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:30:04.200 00:43:57 -- common/autotest_common.sh@145 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:30:04.200 00:43:57 -- common/autotest_common.sh@147 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:30:04.200 00:43:57 -- common/autotest_common.sh@149 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:30:04.200 00:43:57 -- common/autotest_common.sh@151 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:30:04.200 00:43:57 -- common/autotest_common.sh@153 -- # : 00:30:04.200 00:43:57 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:30:04.200 00:43:57 -- common/autotest_common.sh@155 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:30:04.200 00:43:57 -- common/autotest_common.sh@157 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:30:04.200 00:43:57 -- common/autotest_common.sh@159 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:30:04.200 00:43:57 -- common/autotest_common.sh@161 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:30:04.200 00:43:57 -- common/autotest_common.sh@163 -- # : 0 00:30:04.200 00:43:57 
-- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:30:04.200 00:43:57 -- common/autotest_common.sh@166 -- # : 00:30:04.200 00:43:57 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:30:04.200 00:43:57 -- common/autotest_common.sh@168 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:30:04.200 00:43:57 -- common/autotest_common.sh@170 -- # : 0 00:30:04.200 00:43:57 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:30:04.200 00:43:57 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:30:04.200 00:43:57 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:30:04.200 00:43:57 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:30:04.200 00:43:57 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:30:04.200 00:43:57 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:04.200 00:43:57 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:04.200 00:43:57 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:04.200 00:43:57 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:04.200 00:43:57 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:30:04.200 00:43:57 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:30:04.200 00:43:57 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:30:04.200 00:43:57 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:30:04.200 00:43:57 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:30:04.201 00:43:57 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:30:04.201 00:43:57 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:04.201 00:43:57 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:04.201 00:43:57 
-- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:04.201 00:43:57 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:04.201 00:43:57 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:30:04.201 00:43:57 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:30:04.201 00:43:57 -- common/autotest_common.sh@199 -- # cat 00:30:04.201 00:43:57 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:30:04.201 00:43:57 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:04.201 00:43:57 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:04.201 00:43:57 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:04.201 00:43:57 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:04.201 00:43:57 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:30:04.201 00:43:57 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:30:04.201 00:43:57 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:30:04.201 00:43:57 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:30:04.201 00:43:57 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:30:04.201 00:43:57 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:30:04.201 00:43:57 -- common/autotest_common.sh@242 -- # export QEMU_BIN= 00:30:04.201 00:43:57 -- common/autotest_common.sh@242 -- # QEMU_BIN= 00:30:04.201 00:43:57 -- common/autotest_common.sh@243 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:30:04.201 00:43:57 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:30:04.201 00:43:57 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:30:04.201 00:43:57 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:30:04.201 00:43:57 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:04.201 00:43:57 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:04.201 00:43:57 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:30:04.201 00:43:57 -- common/autotest_common.sh@252 -- # export valgrind= 00:30:04.201 00:43:57 -- common/autotest_common.sh@252 -- # valgrind= 00:30:04.201 00:43:57 -- common/autotest_common.sh@258 -- # uname -s 00:30:04.201 00:43:57 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:30:04.201 00:43:57 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:30:04.201 00:43:57 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:30:04.201 00:43:57 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:30:04.201 00:43:57 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:30:04.201 00:43:57 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:30:04.201 00:43:57 -- common/autotest_common.sh@268 -- # MAKE=make 00:30:04.201 00:43:57 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:30:04.201 00:43:57 -- common/autotest_common.sh@285 -- # 
export HUGEMEM=4096 00:30:04.201 00:43:57 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:30:04.201 00:43:57 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:30:04.201 00:43:57 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:30:04.201 00:43:57 -- common/autotest_common.sh@307 -- # [[ -z 142047 ]] 00:30:04.201 00:43:57 -- common/autotest_common.sh@307 -- # kill -0 142047 00:30:04.201 00:43:57 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:30:04.201 00:43:57 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:30:04.201 00:43:57 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:30:04.201 00:43:57 -- common/autotest_common.sh@320 -- # local mount target_dir 00:30:04.201 00:43:57 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:30:04.201 00:43:57 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:30:04.201 00:43:57 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:30:04.201 00:43:57 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:30:04.201 00:43:57 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.64DAbj 00:30:04.201 00:43:57 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:30:04.201 00:43:57 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:30:04.201 00:43:57 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:30:04.201 00:43:57 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.64DAbj/tests/interrupt /tmp/spdk.64DAbj 00:30:04.201 00:43:57 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:30:04.201 00:43:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:04.201 00:43:57 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:30:04.201 00:43:57 -- common/autotest_common.sh@316 -- # df -T 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=1248956416 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253683200 00:30:04.201 00:43:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=4726784 00:30:04.201 00:43:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda1 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=10372780032 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20616794112 00:30:04.201 00:43:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=10227236864 00:30:04.201 00:43:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=6263693312 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6268403712 00:30:04.201 00:43:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=4710400 00:30:04.201 00:43:57 -- common/autotest_common.sh@349 -- # read -r source fs size use 
avail _ mount 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=5242880 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5242880 00:30:04.201 00:43:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:30:04.201 00:43:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda15 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=103061504 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=109395968 00:30:04.201 00:43:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=6334464 00:30:04.201 00:43:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253675008 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253679104 00:30:04.201 00:43:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:30:04.201 00:43:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:30:04.201 00:43:57 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # avails["$mount"]=94174060544 00:30:04.201 00:43:57 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:30:04.201 00:43:57 -- common/autotest_common.sh@352 -- # uses["$mount"]=5528719360 00:30:04.201 00:43:57 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:04.201 00:43:57 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:30:04.201 * Looking for test storage... 
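The df bookkeeping above is set_test_storage choosing where scratch data goes: it wants roughly 2 GiB plus a small margin and walks a list of candidate directories until one of them sits on a filesystem with enough free space. A simplified stand-in for that logic (the real helper parses the full df -T table into arrays, as traced above):

requested_size=2214592512   # 2 GiB plus the extra margin requested in this run
testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
storage_fallback=$(mktemp -udt spdk.XXXXXX)

for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
    mkdir -p "$target_dir"
    # df reports 1K blocks; convert to bytes before comparing with the requested size.
    target_space=$(( $(df --output=avail "$target_dir" | tail -1) * 1024 ))
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done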
00:30:04.201 00:43:57 -- common/autotest_common.sh@357 -- # local target_space new_size 00:30:04.201 00:43:57 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:30:04.201 00:43:57 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:30:04.201 00:43:57 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:30:04.201 00:43:57 -- common/autotest_common.sh@361 -- # mount=/ 00:30:04.201 00:43:57 -- common/autotest_common.sh@363 -- # target_space=10372780032 00:30:04.201 00:43:57 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:30:04.201 00:43:57 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:30:04.201 00:43:57 -- common/autotest_common.sh@369 -- # [[ ext4 == tmpfs ]] 00:30:04.201 00:43:57 -- common/autotest_common.sh@369 -- # [[ ext4 == ramfs ]] 00:30:04.201 00:43:57 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:30:04.201 00:43:57 -- common/autotest_common.sh@370 -- # new_size=12441829376 00:30:04.201 00:43:57 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:30:04.201 00:43:57 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:30:04.201 00:43:57 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:30:04.201 00:43:57 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:30:04.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:30:04.202 00:43:57 -- common/autotest_common.sh@378 -- # return 0 00:30:04.202 00:43:57 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:30:04.202 00:43:57 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:30:04.202 00:43:57 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:30:04.202 00:43:57 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:30:04.202 00:43:57 -- common/autotest_common.sh@1673 -- # true 00:30:04.202 00:43:57 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:30:04.202 00:43:57 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:30:04.202 00:43:57 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:30:04.202 00:43:57 -- common/autotest_common.sh@27 -- # exec 00:30:04.202 00:43:57 -- common/autotest_common.sh@29 -- # exec 00:30:04.202 00:43:57 -- common/autotest_common.sh@31 -- # xtrace_restore 00:30:04.202 00:43:57 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:30:04.202 00:43:57 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:30:04.202 00:43:57 -- common/autotest_common.sh@18 -- # set -x 00:30:04.202 00:43:57 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:04.202 00:43:57 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:30:04.202 00:43:57 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:30:04.202 00:43:57 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:30:04.202 00:43:57 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:30:04.202 00:43:57 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:30:04.202 00:43:57 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:30:04.202 00:43:57 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:30:04.202 00:43:57 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:30:04.202 00:43:57 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.202 00:43:57 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:30:04.202 00:43:57 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142091 00:30:04.202 00:43:57 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:30:04.202 00:43:57 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:04.202 00:43:57 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 142091 /var/tmp/spdk.sock 00:30:04.202 00:43:57 -- common/autotest_common.sh@817 -- # '[' -z 142091 ']' 00:30:04.202 00:43:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.202 00:43:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:04.202 00:43:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.202 00:43:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:04.202 00:43:57 -- common/autotest_common.sh@10 -- # set +x 00:30:04.202 [2024-04-24 00:43:57.893607] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
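At this point interrupt_common.sh launches the interrupt target and waits for its RPC socket. The command line below is copied from the trace; the wait loop is only a minimal approximation of waitforlisten (the real helper, as seen above, allows up to 100 retries and does additional checks):

rpc_addr=/var/tmp/spdk.sock

/home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r "$rpc_addr" -E -g &
intr_tgt_pid=$!
trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

# Wait for the target to come up before issuing any rpc.py calls against the socket.
for ((i = 0; i < 100; i++)); do
    [[ -S $rpc_addr ]] && kill -0 "$intr_tgt_pid" 2> /dev/null && break
    sleep 0.1
done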
00:30:04.202 [2024-04-24 00:43:57.893811] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142091 ] 00:30:04.460 [2024-04-24 00:43:58.087050] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:04.717 [2024-04-24 00:43:58.383453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.717 [2024-04-24 00:43:58.383603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.717 [2024-04-24 00:43:58.383609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.283 [2024-04-24 00:43:58.770969] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:05.283 00:43:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:05.283 00:43:58 -- common/autotest_common.sh@850 -- # return 0 00:30:05.283 00:43:58 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:30:05.283 00:43:58 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:05.541 Malloc0 00:30:05.541 Malloc1 00:30:05.541 Malloc2 00:30:05.541 00:43:59 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:30:05.541 00:43:59 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:30:05.541 00:43:59 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:05.541 00:43:59 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:30:05.541 5000+0 records in 00:30:05.541 5000+0 records out 00:30:05.541 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0235095 s, 436 MB/s 00:30:05.541 00:43:59 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:30:05.806 AIO0 00:30:05.806 00:43:59 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 142091 00:30:05.806 00:43:59 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 142091 without_thd 00:30:05.806 00:43:59 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=142091 00:30:05.806 00:43:59 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:30:05.806 00:43:59 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:30:05.806 00:43:59 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:30:05.806 00:43:59 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:30:05.806 00:43:59 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:30:05.806 00:43:59 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:30:05.806 00:43:59 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:05.806 00:43:59 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:30:05.806 00:43:59 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:06.064 00:43:59 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:30:06.064 00:43:59 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:30:06.064 00:43:59 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:30:06.064 00:43:59 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:30:06.064 00:43:59 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:30:06.064 00:43:59 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:30:06.064 00:43:59 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:06.064 00:43:59 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:30:06.064 00:43:59 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:06.321 00:44:00 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:30:06.321 00:44:00 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:30:06.321 spdk_thread ids are 1 on reactor0. 00:30:06.322 00:44:00 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:30:06.322 00:44:00 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:06.322 00:44:00 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142091 0 00:30:06.322 00:44:00 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142091 0 idle 00:30:06.322 00:44:00 -- interrupt/interrupt_common.sh@33 -- # local pid=142091 00:30:06.322 00:44:00 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:30:06.322 00:44:00 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:30:06.322 00:44:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:30:06.322 00:44:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:30:06.322 00:44:00 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:06.322 00:44:00 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:06.322 00:44:00 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:06.322 00:44:00 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142091 -w 256 00:30:06.322 00:44:00 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142091 root 20 0 20.1t 149100 31604 S 6.7 1.2 0:01.01 reactor_0' 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@48 -- # echo 142091 root 20 0 20.1t 149100 31604 S 6.7 1.2 0:01.01 reactor_0 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:06.580 00:44:00 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:06.580 00:44:00 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142091 1 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142091 1 idle 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@33 -- # local pid=142091 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:30:06.580 
00:44:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142091 -w 256 00:30:06.580 00:44:00 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142094 root 20 0 20.1t 149100 31604 S 0.0 1.2 0:00.00 reactor_1' 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@48 -- # echo 142094 root 20 0 20.1t 149100 31604 S 0.0 1.2 0:00.00 reactor_1 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:06.838 00:44:00 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:06.838 00:44:00 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142091 2 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142091 2 idle 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@33 -- # local pid=142091 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142091 -w 256 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142095 root 20 0 20.1t 149100 31604 S 0.0 1.2 0:00.00 reactor_2' 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@48 -- # echo 142095 root 20 0 20.1t 149100 31604 S 0.0 1.2 0:00.00 reactor_2 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:30:06.838 00:44:00 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:06.838 00:44:00 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:30:06.838 00:44:00 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
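The top/grep/awk sequence above is how reactor_is_busy_or_idle classifies a reactor: take one batch sample of the target's threads, read the %CPU column, and compare it against fixed thresholds (at least 70% counts as busy, at most 30% as idle). A sketch of that check with the PID from this run:

pid=142091; idx=0; state=idle

top_line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | sed -e 's/^\s*//g')
cpu_rate=$(awk '{print $9}' <<< "$top_line")   # %CPU is the 9th column of top's batch output
cpu_rate=${cpu_rate%.*}                        # 6.7 -> 6, 99.9 -> 99

if [[ $state == busy ]] && ((cpu_rate < 70)); then
    echo "reactor_$idx expected busy but runs at ${cpu_rate}%" >&2
elif [[ $state == idle ]] && ((cpu_rate > 30)); then
    echo "reactor_$idx expected idle but runs at ${cpu_rate}%" >&2
fi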
00:30:06.838 00:44:00 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:30:07.096 [2024-04-24 00:44:00.821637] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:07.096 00:44:00 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:30:07.353 [2024-04-24 00:44:01.101311] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:30:07.353 [2024-04-24 00:44:01.101776] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:07.353 00:44:01 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:30:07.611 [2024-04-24 00:44:01.385224] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:30:07.611 [2024-04-24 00:44:01.385893] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:07.611 00:44:01 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:30:07.611 00:44:01 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142091 0 00:30:07.611 00:44:01 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142091 0 busy 00:30:07.611 00:44:01 -- interrupt/interrupt_common.sh@33 -- # local pid=142091 00:30:07.611 00:44:01 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:30:07.611 00:44:01 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:30:07.611 00:44:01 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:30:07.611 00:44:01 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142091 -w 256 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142091 root 20 0 20.1t 149236 31604 R 99.9 1.2 0:01.48 reactor_0' 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@48 -- # echo 142091 root 20 0 20.1t 149236 31604 R 99.9 1.2 0:01.48 reactor_0 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:07.869 00:44:01 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:30:07.869 00:44:01 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142091 2 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142091 2 busy 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@33 -- # local pid=142091 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:30:07.869 
00:44:01 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142091 -w 256 00:30:07.869 00:44:01 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:30:08.127 00:44:01 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142095 root 20 0 20.1t 149236 31604 R 99.9 1.2 0:00.34 reactor_2' 00:30:08.127 00:44:01 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:08.127 00:44:01 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:08.127 00:44:01 -- interrupt/interrupt_common.sh@48 -- # echo 142095 root 20 0 20.1t 149236 31604 R 99.9 1.2 0:00.34 reactor_2 00:30:08.127 00:44:01 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:30:08.127 00:44:01 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:30:08.127 00:44:01 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:30:08.127 00:44:01 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:30:08.127 00:44:01 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:30:08.127 00:44:01 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:08.127 00:44:01 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:30:08.386 [2024-04-24 00:44:02.017096] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:30:08.386 [2024-04-24 00:44:02.017427] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:08.386 00:44:02 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:30:08.386 00:44:02 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 142091 2 00:30:08.386 00:44:02 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142091 2 idle 00:30:08.386 00:44:02 -- interrupt/interrupt_common.sh@33 -- # local pid=142091 00:30:08.386 00:44:02 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:30:08.386 00:44:02 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:30:08.386 00:44:02 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:30:08.386 00:44:02 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:30:08.386 00:44:02 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:08.386 00:44:02 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:08.386 00:44:02 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:08.386 00:44:02 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142091 -w 256 00:30:08.386 00:44:02 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:30:08.644 00:44:02 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142095 root 20 0 20.1t 149292 31604 S 0.0 1.2 0:00.62 reactor_2' 00:30:08.644 00:44:02 -- interrupt/interrupt_common.sh@48 -- # echo 142095 root 20 0 20.1t 149292 31604 S 0.0 1.2 0:00.62 reactor_2 00:30:08.644 00:44:02 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:08.644 00:44:02 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:08.644 00:44:02 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:30:08.644 00:44:02 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:30:08.644 00:44:02 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:30:08.644 00:44:02 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:30:08.644 00:44:02 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:30:08.644 00:44:02 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:08.644 00:44:02 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:30:08.903 [2024-04-24 00:44:02.464974] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:30:08.903 [2024-04-24 00:44:02.465513] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:08.903 00:44:02 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:30:08.903 00:44:02 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:30:08.904 00:44:02 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:30:09.163 [2024-04-24 00:44:02.761558] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:09.163 00:44:02 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 142091 0 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142091 0 idle 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@33 -- # local pid=142091 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142091 -w 256 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142091 root 20 0 20.1t 149384 31604 S 0.0 1.2 0:02.38 reactor_0' 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@48 -- # echo 142091 root 20 0 20.1t 149384 31604 S 0.0 1.2 0:02.38 reactor_0 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:09.163 00:44:02 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:09.423 00:44:02 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:30:09.423 00:44:02 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:30:09.423 00:44:02 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:30:09.423 00:44:02 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:30:09.423 00:44:02 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:30:09.423 00:44:02 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:09.423 00:44:02 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:30:09.423 00:44:02 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:30:09.423 00:44:02 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:30:09.423 00:44:02 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 142091 
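Stripped of the polling and top checks, the RPC sequence driving the without_thd pass above boils down to the following (all rpc.py invocations copied from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc thread_set_cpumask -i 1 -m 0x2                              # park app_thread on reactor 1
$rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # reactor 0 -> poll mode
$rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # reactor 2 -> poll mode
# ...reactors 0 and 2 are now expected to show ~100% CPU in the busy checks above...
$rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2      # back to interrupt mode
$rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0
$rpc thread_set_cpumask -i 1 -m 0x1                              # move app_thread back to reactor 0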
00:30:09.423 00:44:02 -- common/autotest_common.sh@936 -- # '[' -z 142091 ']' 00:30:09.423 00:44:02 -- common/autotest_common.sh@940 -- # kill -0 142091 00:30:09.423 00:44:02 -- common/autotest_common.sh@941 -- # uname 00:30:09.423 00:44:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:09.423 00:44:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142091 00:30:09.423 00:44:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:09.423 00:44:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:09.423 00:44:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142091' 00:30:09.423 killing process with pid 142091 00:30:09.423 00:44:02 -- common/autotest_common.sh@955 -- # kill 142091 00:30:09.423 00:44:02 -- common/autotest_common.sh@960 -- # wait 142091 00:30:11.323 00:44:04 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:30:11.323 00:44:04 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:30:11.323 00:44:04 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:30:11.323 00:44:04 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.323 00:44:04 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:30:11.323 00:44:04 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:30:11.323 00:44:04 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142256 00:30:11.323 00:44:04 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:11.323 00:44:04 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 142256 /var/tmp/spdk.sock 00:30:11.323 00:44:04 -- common/autotest_common.sh@817 -- # '[' -z 142256 ']' 00:30:11.323 00:44:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.323 00:44:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:11.323 00:44:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.323 00:44:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:11.323 00:44:04 -- common/autotest_common.sh@10 -- # set +x 00:30:11.323 [2024-04-24 00:44:05.056048] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:30:11.323 [2024-04-24 00:44:05.056418] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142256 ] 00:30:11.591 [2024-04-24 00:44:05.249324] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:11.849 [2024-04-24 00:44:05.486293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.849 [2024-04-24 00:44:05.486438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.849 [2024-04-24 00:44:05.486434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.108 [2024-04-24 00:44:05.873361] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
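killprocess, as traced above, verifies that the PID still belongs to the expected process before tearing it down. Roughly (the special handling for processes started via sudo is omitted here):

pid=142091
kill -0 "$pid"                                    # still running?
process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" in this run
if [[ $process_name != sudo ]]; then
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                   # reap it so the RPC socket can be reused
fi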
00:30:12.365 00:44:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:12.365 00:44:05 -- common/autotest_common.sh@850 -- # return 0 00:30:12.365 00:44:05 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:30:12.365 00:44:05 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:12.623 Malloc0 00:30:12.623 Malloc1 00:30:12.623 Malloc2 00:30:12.881 00:44:06 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:30:12.881 00:44:06 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:30:12.881 00:44:06 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:12.881 00:44:06 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:30:12.881 5000+0 records in 00:30:12.881 5000+0 records out 00:30:12.881 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0354232 s, 289 MB/s 00:30:12.881 00:44:06 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:30:13.140 AIO0 00:30:13.140 00:44:06 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 142256 00:30:13.140 00:44:06 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 142256 00:30:13.140 00:44:06 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=142256 00:30:13.140 00:44:06 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:30:13.140 00:44:06 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:30:13.140 00:44:06 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:30:13.140 00:44:06 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:30:13.140 00:44:06 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:30:13.140 00:44:06 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:30:13.140 00:44:06 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:13.141 00:44:06 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:30:13.141 00:44:06 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:13.399 00:44:07 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:30:13.399 00:44:07 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:30:13.399 00:44:07 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:30:13.399 00:44:07 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:30:13.399 00:44:07 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:30:13.399 00:44:07 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:30:13.399 00:44:07 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:13.399 00:44:07 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:30:13.399 00:44:07 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:30:13.658 00:44:07 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:30:13.658 spdk_thread ids are 1 on reactor0. 
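setup_bdev_aio backs an AIO bdev with a plain file of zeroes; the dd and rpc.py calls below are the ones traced above (cleanup later removes the file again with rm -f):

aiofile=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

dd if=/dev/zero of="$aiofile" bs=2048 count=5000   # 5000 * 2048 B = 10240000 B of zeroes
$rpc bdev_aio_create "$aiofile" AIO0 2048          # expose the file as bdev "AIO0" with 2048-byte blocks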
00:30:13.658 00:44:07 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:30:13.658 00:44:07 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:30:13.658 00:44:07 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:13.658 00:44:07 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142256 0 00:30:13.658 00:44:07 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142256 0 idle 00:30:13.658 00:44:07 -- interrupt/interrupt_common.sh@33 -- # local pid=142256 00:30:13.658 00:44:07 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:30:13.658 00:44:07 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:30:13.658 00:44:07 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:30:13.658 00:44:07 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:30:13.658 00:44:07 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:13.658 00:44:07 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:13.658 00:44:07 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:13.658 00:44:07 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142256 -w 256 00:30:13.658 00:44:07 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142256 root 20 0 20.1t 149048 31560 S 0.0 1.2 0:00.98 reactor_0' 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@48 -- # echo 142256 root 20 0 20.1t 149048 31560 S 0.0 1.2 0:00.98 reactor_0 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:13.918 00:44:07 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:13.918 00:44:07 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142256 1 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142256 1 idle 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@33 -- # local pid=142256 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142256 -w 256 00:30:13.918 00:44:07 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142261 root 20 0 20.1t 149048 31560 S 0.0 1.2 0:00.00 reactor_1' 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@48 -- # echo 142261 root 20 0 20.1t 149048 31560 S 0.0 1.2 0:00.00 reactor_1 00:30:14.177 00:44:07 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:14.177 00:44:07 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:30:14.177 00:44:07 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142256 2 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142256 2 idle 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@33 -- # local pid=142256 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142256 -w 256 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142262 root 20 0 20.1t 149048 31560 S 0.0 1.2 0:00.00 reactor_2' 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@48 -- # echo 142262 root 20 0 20.1t 149048 31560 S 0.0 1.2 0:00.00 reactor_2 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:30:14.177 00:44:07 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:14.177 00:44:07 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:30:14.177 00:44:07 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:30:14.435 [2024-04-24 00:44:08.127756] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:30:14.435 [2024-04-24 00:44:08.127943] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
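reactor_get_thread_ids, seen earlier in the trace, maps a reactor to the SPDK thread ids currently pinned to it by filtering thread_get_stats output with jq. A sketch for reactor 0 (cpumask 0x1, passed to jq as "1"):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

thd0_ids=($($rpc thread_get_stats \
    | jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'))
# In this run only app_thread (id 1) is pinned to reactor 0; the matching query for reactor 2
# (cpumask 4) printed an empty string above because no thread lives there yet.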
00:30:14.435 [2024-04-24 00:44:08.128167] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:14.435 00:44:08 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:30:14.693 [2024-04-24 00:44:08.435746] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:30:14.693 [2024-04-24 00:44:08.436343] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:14.693 00:44:08 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:30:14.694 00:44:08 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142256 0 00:30:14.694 00:44:08 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142256 0 busy 00:30:14.694 00:44:08 -- interrupt/interrupt_common.sh@33 -- # local pid=142256 00:30:14.694 00:44:08 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:30:14.694 00:44:08 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:30:14.694 00:44:08 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:30:14.694 00:44:08 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:14.694 00:44:08 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:14.694 00:44:08 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:14.694 00:44:08 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142256 -w 256 00:30:14.694 00:44:08 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142256 root 20 0 20.1t 149108 31560 R 99.9 1.2 0:01.48 reactor_0' 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@48 -- # echo 142256 root 20 0 20.1t 149108 31560 R 99.9 1.2 0:01.48 reactor_0 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:14.952 00:44:08 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:30:14.952 00:44:08 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142256 2 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142256 2 busy 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@33 -- # local pid=142256 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142256 -w 256 00:30:14.952 00:44:08 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:30:15.211 00:44:08 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
142262 root 20 0 20.1t 149108 31560 R 86.7 1.2 0:00.34 reactor_2' 00:30:15.211 00:44:08 -- interrupt/interrupt_common.sh@48 -- # echo 142262 root 20 0 20.1t 149108 31560 R 86.7 1.2 0:00.34 reactor_2 00:30:15.211 00:44:08 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:15.211 00:44:08 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:15.211 00:44:08 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=86.7 00:30:15.211 00:44:08 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=86 00:30:15.211 00:44:08 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:30:15.211 00:44:08 -- interrupt/interrupt_common.sh@51 -- # [[ 86 -lt 70 ]] 00:30:15.211 00:44:08 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:30:15.211 00:44:08 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:15.211 00:44:08 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:30:15.469 [2024-04-24 00:44:09.112037] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:30:15.469 [2024-04-24 00:44:09.112442] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:15.469 00:44:09 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:30:15.469 00:44:09 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 142256 2 00:30:15.469 00:44:09 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142256 2 idle 00:30:15.469 00:44:09 -- interrupt/interrupt_common.sh@33 -- # local pid=142256 00:30:15.469 00:44:09 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:30:15.469 00:44:09 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:30:15.469 00:44:09 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:30:15.469 00:44:09 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:30:15.469 00:44:09 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:15.469 00:44:09 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:15.469 00:44:09 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:15.469 00:44:09 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142256 -w 256 00:30:15.469 00:44:09 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:30:15.793 00:44:09 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142262 root 20 0 20.1t 149200 31560 S 0.0 1.2 0:00.64 reactor_2' 00:30:15.793 00:44:09 -- interrupt/interrupt_common.sh@48 -- # echo 142262 root 20 0 20.1t 149200 31560 S 0.0 1.2 0:00.64 reactor_2 00:30:15.793 00:44:09 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:15.793 00:44:09 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:15.793 00:44:09 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:30:15.793 00:44:09 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:30:15.793 00:44:09 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:30:15.793 00:44:09 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:30:15.793 00:44:09 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:30:15.793 00:44:09 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:15.793 00:44:09 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:30:15.793 [2024-04-24 00:44:09.568119] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0. 00:30:15.793 [2024-04-24 00:44:09.568777] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:30:15.793 [2024-04-24 00:44:09.568928] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:30:16.052 00:44:09 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:30:16.052 00:44:09 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 142256 0 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142256 0 idle 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@33 -- # local pid=142256 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@41 -- # hash top 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142256 -w 256 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142256 root 20 0 20.1t 149244 31560 S 0.0 1.2 0:02.43 reactor_0' 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@48 -- # echo 142256 root 20 0 20.1t 149244 31560 S 0.0 1.2 0:02.43 reactor_0 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:30:16.052 00:44:09 -- interrupt/interrupt_common.sh@56 -- # return 0 00:30:16.052 00:44:09 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:30:16.052 00:44:09 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:30:16.052 00:44:09 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:16.052 00:44:09 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 142256 00:30:16.052 00:44:09 -- common/autotest_common.sh@936 -- # '[' -z 142256 ']' 00:30:16.052 00:44:09 -- common/autotest_common.sh@940 -- # kill -0 142256 00:30:16.052 00:44:09 -- common/autotest_common.sh@941 -- # uname 00:30:16.052 00:44:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:16.052 00:44:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142256 00:30:16.052 00:44:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:16.052 00:44:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:16.052 00:44:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142256' 00:30:16.052 killing process with pid 142256 00:30:16.052 00:44:09 -- common/autotest_common.sh@955 -- # kill 142256 00:30:16.052 00:44:09 -- common/autotest_common.sh@960 -- # wait 142256 00:30:18.586 00:44:11 -- 
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:30:18.586 00:44:11 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:30:18.586 ************************************ 00:30:18.586 END TEST reactor_set_interrupt 00:30:18.586 ************************************ 00:30:18.586 00:30:18.586 real 0m14.340s 00:30:18.586 user 0m15.298s 00:30:18.586 sys 0m1.909s 00:30:18.586 00:44:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:18.586 00:44:11 -- common/autotest_common.sh@10 -- # set +x 00:30:18.586 00:44:11 -- spdk/autotest.sh@190 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:30:18.586 00:44:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:18.586 00:44:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:18.586 00:44:11 -- common/autotest_common.sh@10 -- # set +x 00:30:18.586 ************************************ 00:30:18.586 START TEST reap_unregistered_poller 00:30:18.586 ************************************ 00:30:18.586 00:44:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:30:18.586 * Looking for test storage... 00:30:18.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:30:18.586 00:44:12 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:30:18.586 00:44:12 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:30:18.586 00:44:12 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:30:18.586 00:44:12 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:30:18.586 00:44:12 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
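[editor's note] The reactor_set_interrupt trace above decides whether a reactor is busy or idle by sampling the reactor thread's CPU usage with top. A minimal sketch of that probe, reconstructed from the traced commands (the function name, the 70%/30% thresholds and the 10-try retry budget are read off the trace itself; treat this as an illustration, not the verbatim test/interrupt/interrupt_common.sh source):

```bash
#!/usr/bin/env bash
# Sketch of the busy/idle probe seen in the trace above (assumption: this
# paraphrases interrupt_common.sh rather than reproducing it exactly).
reactor_is_busy_or_idle() {
    local pid=$1 idx=$2 state=$3   # target process, reactor index, expected state
    local j line cpu_rate

    for ((j = 10; j != 0; j--)); do
        # One batch snapshot of the process' threads; the reactor thread is
        # named reactor_<idx> and column 9 of top's output is %CPU.
        line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
        cpu_rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%%.*}          # drop the fractional part

        if [[ $state == busy && $cpu_rate -lt 70 ]]; then
            continue                      # expected busy, still too idle: re-sample
        elif [[ $state == idle && $cpu_rate -gt 30 ]]; then
            continue                      # expected idle, still too busy: re-sample
        fi
        return 0                          # reactor matched the expected state
    done
    return 1
}
```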
00:30:18.586 00:44:12 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:18.586 00:44:12 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:30:18.586 00:44:12 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:30:18.586 00:44:12 -- common/autotest_common.sh@34 -- # set -e 00:30:18.586 00:44:12 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:30:18.586 00:44:12 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:30:18.586 00:44:12 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:30:18.586 00:44:12 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:30:18.586 00:44:12 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:30:18.586 00:44:12 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:30:18.586 00:44:12 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:30:18.586 00:44:12 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:30:18.586 00:44:12 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:30:18.586 00:44:12 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:30:18.586 00:44:12 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:30:18.586 00:44:12 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:30:18.586 00:44:12 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:30:18.586 00:44:12 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:30:18.586 00:44:12 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:30:18.586 00:44:12 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:30:18.586 00:44:12 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:30:18.586 00:44:12 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:30:18.587 00:44:12 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:30:18.587 00:44:12 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:30:18.587 00:44:12 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:30:18.587 00:44:12 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:30:18.587 00:44:12 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:30:18.587 00:44:12 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:30:18.587 00:44:12 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:30:18.587 00:44:12 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:30:18.587 00:44:12 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:30:18.587 00:44:12 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:30:18.587 00:44:12 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:30:18.587 00:44:12 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:30:18.587 00:44:12 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:30:18.587 00:44:12 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:30:18.587 00:44:12 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:30:18.587 00:44:12 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:30:18.587 00:44:12 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:30:18.587 00:44:12 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:30:18.587 00:44:12 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:30:18.587 00:44:12 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:30:18.587 00:44:12 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:30:18.587 00:44:12 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:30:18.587 00:44:12 
-- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:30:18.587 00:44:12 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:30:18.587 00:44:12 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:30:18.587 00:44:12 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:30:18.587 00:44:12 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:30:18.587 00:44:12 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:30:18.587 00:44:12 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:30:18.587 00:44:12 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:30:18.587 00:44:12 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:30:18.587 00:44:12 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:30:18.587 00:44:12 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:30:18.587 00:44:12 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:30:18.587 00:44:12 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:30:18.587 00:44:12 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:30:18.587 00:44:12 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:30:18.587 00:44:12 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:30:18.587 00:44:12 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:30:18.587 00:44:12 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:30:18.587 00:44:12 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:30:18.587 00:44:12 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:30:18.587 00:44:12 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:30:18.587 00:44:12 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:30:18.587 00:44:12 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:30:18.587 00:44:12 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:30:18.587 00:44:12 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:30:18.587 00:44:12 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:30:18.587 00:44:12 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:30:18.587 00:44:12 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:30:18.587 00:44:12 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:30:18.587 00:44:12 -- common/build_config.sh@65 -- # CONFIG_SHARED=n 00:30:18.587 00:44:12 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=y 00:30:18.587 00:44:12 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:30:18.587 00:44:12 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:30:18.587 00:44:12 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:30:18.587 00:44:12 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:30:18.587 00:44:12 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:30:18.587 00:44:12 -- common/build_config.sh@72 -- # CONFIG_RAID5F=y 00:30:18.587 00:44:12 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:30:18.587 00:44:12 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:30:18.587 00:44:12 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:30:18.587 00:44:12 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:30:18.587 00:44:12 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:30:18.587 00:44:12 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:30:18.587 00:44:12 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:30:18.587 00:44:12 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:30:18.587 00:44:12 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:30:18.587 00:44:12 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:30:18.587 00:44:12 -- 
common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:30:18.587 00:44:12 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:30:18.587 00:44:12 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:30:18.587 00:44:12 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:30:18.587 00:44:12 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:30:18.587 00:44:12 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:30:18.587 00:44:12 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:30:18.587 00:44:12 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:30:18.587 00:44:12 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:30:18.587 00:44:12 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:30:18.587 00:44:12 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:30:18.587 00:44:12 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:30:18.587 00:44:12 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:30:18.587 00:44:12 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:30:18.587 00:44:12 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:30:18.587 00:44:12 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:30:18.587 #define SPDK_CONFIG_H 00:30:18.587 #define SPDK_CONFIG_APPS 1 00:30:18.587 #define SPDK_CONFIG_ARCH native 00:30:18.587 #define SPDK_CONFIG_ASAN 1 00:30:18.587 #undef SPDK_CONFIG_AVAHI 00:30:18.587 #undef SPDK_CONFIG_CET 00:30:18.587 #define SPDK_CONFIG_COVERAGE 1 00:30:18.587 #define SPDK_CONFIG_CROSS_PREFIX 00:30:18.587 #undef SPDK_CONFIG_CRYPTO 00:30:18.587 #undef SPDK_CONFIG_CRYPTO_MLX5 00:30:18.587 #undef SPDK_CONFIG_CUSTOMOCF 00:30:18.587 #undef SPDK_CONFIG_DAOS 00:30:18.587 #define SPDK_CONFIG_DAOS_DIR 00:30:18.587 #define SPDK_CONFIG_DEBUG 1 00:30:18.587 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:30:18.587 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:30:18.587 #define SPDK_CONFIG_DPDK_INC_DIR 00:30:18.587 #define SPDK_CONFIG_DPDK_LIB_DIR 00:30:18.587 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:30:18.587 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:30:18.587 #define SPDK_CONFIG_EXAMPLES 1 00:30:18.587 #undef SPDK_CONFIG_FC 00:30:18.587 #define SPDK_CONFIG_FC_PATH 00:30:18.587 #define SPDK_CONFIG_FIO_PLUGIN 1 00:30:18.587 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:30:18.587 #undef SPDK_CONFIG_FUSE 00:30:18.587 #undef SPDK_CONFIG_FUZZER 00:30:18.587 #define SPDK_CONFIG_FUZZER_LIB 00:30:18.587 #undef SPDK_CONFIG_GOLANG 00:30:18.587 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:30:18.587 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:30:18.587 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:30:18.587 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:30:18.587 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:30:18.587 #undef SPDK_CONFIG_HAVE_LIBBSD 00:30:18.587 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:30:18.587 #define SPDK_CONFIG_IDXD 1 00:30:18.587 #undef SPDK_CONFIG_IDXD_KERNEL 00:30:18.587 #undef SPDK_CONFIG_IPSEC_MB 00:30:18.587 #define SPDK_CONFIG_IPSEC_MB_DIR 00:30:18.587 #define SPDK_CONFIG_ISAL 1 00:30:18.587 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:30:18.587 #define SPDK_CONFIG_ISCSI_INITIATOR 
1 00:30:18.587 #define SPDK_CONFIG_LIBDIR 00:30:18.587 #undef SPDK_CONFIG_LTO 00:30:18.587 #define SPDK_CONFIG_MAX_LCORES 00:30:18.587 #define SPDK_CONFIG_NVME_CUSE 1 00:30:18.587 #undef SPDK_CONFIG_OCF 00:30:18.587 #define SPDK_CONFIG_OCF_PATH 00:30:18.587 #define SPDK_CONFIG_OPENSSL_PATH 00:30:18.587 #undef SPDK_CONFIG_PGO_CAPTURE 00:30:18.587 #define SPDK_CONFIG_PGO_DIR 00:30:18.587 #undef SPDK_CONFIG_PGO_USE 00:30:18.587 #define SPDK_CONFIG_PREFIX /usr/local 00:30:18.587 #define SPDK_CONFIG_RAID5F 1 00:30:18.587 #undef SPDK_CONFIG_RBD 00:30:18.587 #define SPDK_CONFIG_RDMA 1 00:30:18.587 #define SPDK_CONFIG_RDMA_PROV verbs 00:30:18.587 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:30:18.587 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:30:18.587 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:30:18.587 #undef SPDK_CONFIG_SHARED 00:30:18.587 #undef SPDK_CONFIG_SMA 00:30:18.587 #define SPDK_CONFIG_TESTS 1 00:30:18.587 #undef SPDK_CONFIG_TSAN 00:30:18.587 #undef SPDK_CONFIG_UBLK 00:30:18.587 #define SPDK_CONFIG_UBSAN 1 00:30:18.588 #define SPDK_CONFIG_UNIT_TESTS 1 00:30:18.588 #undef SPDK_CONFIG_URING 00:30:18.588 #define SPDK_CONFIG_URING_PATH 00:30:18.588 #undef SPDK_CONFIG_URING_ZNS 00:30:18.588 #undef SPDK_CONFIG_USDT 00:30:18.588 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:30:18.588 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:30:18.588 #undef SPDK_CONFIG_VFIO_USER 00:30:18.588 #define SPDK_CONFIG_VFIO_USER_DIR 00:30:18.588 #define SPDK_CONFIG_VHOST 1 00:30:18.588 #define SPDK_CONFIG_VIRTIO 1 00:30:18.588 #undef SPDK_CONFIG_VTUNE 00:30:18.588 #define SPDK_CONFIG_VTUNE_DIR 00:30:18.588 #define SPDK_CONFIG_WERROR 1 00:30:18.588 #define SPDK_CONFIG_WPDK_DIR 00:30:18.588 #undef SPDK_CONFIG_XNVME 00:30:18.588 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:30:18.588 00:44:12 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:30:18.588 00:44:12 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:18.588 00:44:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.588 00:44:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.588 00:44:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.588 00:44:12 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:18.588 00:44:12 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:18.588 00:44:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 
00:30:18.588 00:44:12 -- paths/export.sh@5 -- # export PATH 00:30:18.588 00:44:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:18.588 00:44:12 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:30:18.588 00:44:12 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:30:18.588 00:44:12 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:30:18.588 00:44:12 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:30:18.588 00:44:12 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:30:18.588 00:44:12 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:30:18.588 00:44:12 -- pm/common@67 -- # TEST_TAG=N/A 00:30:18.588 00:44:12 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:30:18.588 00:44:12 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:30:18.588 00:44:12 -- pm/common@71 -- # uname -s 00:30:18.588 00:44:12 -- pm/common@71 -- # PM_OS=Linux 00:30:18.588 00:44:12 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:30:18.588 00:44:12 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:30:18.588 00:44:12 -- pm/common@76 -- # [[ Linux == Linux ]] 00:30:18.588 00:44:12 -- pm/common@76 -- # [[ QEMU != QEMU ]] 00:30:18.588 00:44:12 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:30:18.588 00:44:12 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:30:18.588 00:44:12 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:30:18.588 00:44:12 -- common/autotest_common.sh@57 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:30:18.588 00:44:12 -- common/autotest_common.sh@61 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:30:18.588 00:44:12 -- common/autotest_common.sh@63 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:30:18.588 00:44:12 -- common/autotest_common.sh@65 -- # : 1 00:30:18.588 00:44:12 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:30:18.588 00:44:12 -- common/autotest_common.sh@67 -- # : 1 00:30:18.588 00:44:12 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:30:18.588 00:44:12 -- common/autotest_common.sh@69 -- # : 00:30:18.588 00:44:12 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:30:18.588 00:44:12 -- common/autotest_common.sh@71 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:30:18.588 00:44:12 -- common/autotest_common.sh@73 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:30:18.588 00:44:12 -- common/autotest_common.sh@75 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:30:18.588 00:44:12 -- common/autotest_common.sh@77 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:30:18.588 00:44:12 -- common/autotest_common.sh@79 -- # : 
1 00:30:18.588 00:44:12 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:30:18.588 00:44:12 -- common/autotest_common.sh@81 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:30:18.588 00:44:12 -- common/autotest_common.sh@83 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:30:18.588 00:44:12 -- common/autotest_common.sh@85 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:30:18.588 00:44:12 -- common/autotest_common.sh@87 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:30:18.588 00:44:12 -- common/autotest_common.sh@89 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:30:18.588 00:44:12 -- common/autotest_common.sh@91 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:30:18.588 00:44:12 -- common/autotest_common.sh@93 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:30:18.588 00:44:12 -- common/autotest_common.sh@95 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:30:18.588 00:44:12 -- common/autotest_common.sh@97 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:30:18.588 00:44:12 -- common/autotest_common.sh@99 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:30:18.588 00:44:12 -- common/autotest_common.sh@101 -- # : rdma 00:30:18.588 00:44:12 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:30:18.588 00:44:12 -- common/autotest_common.sh@103 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:30:18.588 00:44:12 -- common/autotest_common.sh@105 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:30:18.588 00:44:12 -- common/autotest_common.sh@107 -- # : 1 00:30:18.588 00:44:12 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:30:18.588 00:44:12 -- common/autotest_common.sh@109 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:30:18.588 00:44:12 -- common/autotest_common.sh@111 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:30:18.588 00:44:12 -- common/autotest_common.sh@113 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:30:18.588 00:44:12 -- common/autotest_common.sh@115 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:30:18.588 00:44:12 -- common/autotest_common.sh@117 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:30:18.588 00:44:12 -- common/autotest_common.sh@119 -- # : 1 00:30:18.588 00:44:12 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:30:18.588 00:44:12 -- common/autotest_common.sh@121 -- # : 1 00:30:18.588 00:44:12 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:30:18.588 00:44:12 -- common/autotest_common.sh@123 -- # : 00:30:18.588 00:44:12 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:30:18.588 00:44:12 -- common/autotest_common.sh@125 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:30:18.588 00:44:12 -- 
common/autotest_common.sh@127 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:30:18.588 00:44:12 -- common/autotest_common.sh@129 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:30:18.588 00:44:12 -- common/autotest_common.sh@131 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:30:18.588 00:44:12 -- common/autotest_common.sh@133 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:30:18.588 00:44:12 -- common/autotest_common.sh@135 -- # : 0 00:30:18.588 00:44:12 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:30:18.588 00:44:12 -- common/autotest_common.sh@137 -- # : 00:30:18.588 00:44:12 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:30:18.588 00:44:12 -- common/autotest_common.sh@139 -- # : true 00:30:18.588 00:44:12 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:30:18.588 00:44:12 -- common/autotest_common.sh@141 -- # : 1 00:30:18.588 00:44:12 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:30:18.589 00:44:12 -- common/autotest_common.sh@143 -- # : 0 00:30:18.589 00:44:12 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:30:18.589 00:44:12 -- common/autotest_common.sh@145 -- # : 0 00:30:18.589 00:44:12 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:30:18.589 00:44:12 -- common/autotest_common.sh@147 -- # : 0 00:30:18.589 00:44:12 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:30:18.589 00:44:12 -- common/autotest_common.sh@149 -- # : 0 00:30:18.589 00:44:12 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:30:18.589 00:44:12 -- common/autotest_common.sh@151 -- # : 0 00:30:18.589 00:44:12 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:30:18.589 00:44:12 -- common/autotest_common.sh@153 -- # : 00:30:18.589 00:44:12 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:30:18.589 00:44:12 -- common/autotest_common.sh@155 -- # : 0 00:30:18.589 00:44:12 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:30:18.589 00:44:12 -- common/autotest_common.sh@157 -- # : 0 00:30:18.589 00:44:12 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:30:18.589 00:44:12 -- common/autotest_common.sh@159 -- # : 0 00:30:18.589 00:44:12 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:30:18.589 00:44:12 -- common/autotest_common.sh@161 -- # : 0 00:30:18.589 00:44:12 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:30:18.589 00:44:12 -- common/autotest_common.sh@163 -- # : 0 00:30:18.589 00:44:12 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:30:18.589 00:44:12 -- common/autotest_common.sh@166 -- # : 00:30:18.589 00:44:12 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:30:18.589 00:44:12 -- common/autotest_common.sh@168 -- # : 0 00:30:18.589 00:44:12 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:30:18.589 00:44:12 -- common/autotest_common.sh@170 -- # : 0 00:30:18.589 00:44:12 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:30:18.589 00:44:12 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:30:18.589 00:44:12 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:30:18.589 00:44:12 -- common/autotest_common.sh@175 -- # export 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:30:18.589 00:44:12 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:30:18.589 00:44:12 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:18.589 00:44:12 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:18.589 00:44:12 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:18.589 00:44:12 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:30:18.589 00:44:12 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:30:18.589 00:44:12 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:30:18.589 00:44:12 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:30:18.589 00:44:12 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:30:18.589 00:44:12 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:30:18.589 00:44:12 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:30:18.589 00:44:12 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:18.589 00:44:12 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:18.589 00:44:12 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:18.589 00:44:12 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:18.589 00:44:12 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:30:18.589 00:44:12 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:30:18.589 00:44:12 -- common/autotest_common.sh@199 -- # cat 00:30:18.589 00:44:12 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:30:18.589 00:44:12 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:18.589 00:44:12 -- 
common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:18.589 00:44:12 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:18.589 00:44:12 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:18.589 00:44:12 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:30:18.589 00:44:12 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:30:18.589 00:44:12 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:30:18.589 00:44:12 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:30:18.589 00:44:12 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:30:18.589 00:44:12 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:30:18.589 00:44:12 -- common/autotest_common.sh@242 -- # export QEMU_BIN= 00:30:18.589 00:44:12 -- common/autotest_common.sh@242 -- # QEMU_BIN= 00:30:18.589 00:44:12 -- common/autotest_common.sh@243 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:30:18.589 00:44:12 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:30:18.589 00:44:12 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:30:18.589 00:44:12 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:30:18.589 00:44:12 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:18.589 00:44:12 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:18.589 00:44:12 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:30:18.589 00:44:12 -- common/autotest_common.sh@252 -- # export valgrind= 00:30:18.589 00:44:12 -- common/autotest_common.sh@252 -- # valgrind= 00:30:18.589 00:44:12 -- common/autotest_common.sh@258 -- # uname -s 00:30:18.589 00:44:12 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:30:18.589 00:44:12 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:30:18.589 00:44:12 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:30:18.589 00:44:12 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:30:18.589 00:44:12 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:30:18.589 00:44:12 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:30:18.589 00:44:12 -- common/autotest_common.sh@268 -- # MAKE=make 00:30:18.589 00:44:12 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:30:18.589 00:44:12 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:30:18.589 00:44:12 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:30:18.589 00:44:12 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:30:18.589 00:44:12 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:30:18.589 00:44:12 -- common/autotest_common.sh@307 -- # [[ -z 142451 ]] 00:30:18.589 00:44:12 -- common/autotest_common.sh@307 -- # kill -0 142451 00:30:18.589 00:44:12 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:30:18.589 00:44:12 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:30:18.589 00:44:12 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:30:18.589 00:44:12 -- common/autotest_common.sh@320 -- # local mount target_dir 00:30:18.589 00:44:12 -- common/autotest_common.sh@322 -- # local -A mounts fss 
sizes avails uses 00:30:18.589 00:44:12 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:30:18.589 00:44:12 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:30:18.589 00:44:12 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:30:18.590 00:44:12 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.kbEjHi 00:30:18.590 00:44:12 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:30:18.590 00:44:12 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:30:18.590 00:44:12 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:30:18.590 00:44:12 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.kbEjHi/tests/interrupt /tmp/spdk.kbEjHi 00:30:18.590 00:44:12 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:30:18.590 00:44:12 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:18.590 00:44:12 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:30:18.590 00:44:12 -- common/autotest_common.sh@316 -- # df -T 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # avails["$mount"]=1248956416 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253683200 00:30:18.590 00:44:12 -- common/autotest_common.sh@352 -- # uses["$mount"]=4726784 00:30:18.590 00:44:12 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda1 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # avails["$mount"]=10372739072 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20616794112 00:30:18.590 00:44:12 -- common/autotest_common.sh@352 -- # uses["$mount"]=10227277824 00:30:18.590 00:44:12 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # avails["$mount"]=6263693312 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6268403712 00:30:18.590 00:44:12 -- common/autotest_common.sh@352 -- # uses["$mount"]=4710400 00:30:18.590 00:44:12 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # avails["$mount"]=5242880 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5242880 00:30:18.590 00:44:12 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:30:18.590 00:44:12 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda15 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # avails["$mount"]=103061504 00:30:18.590 00:44:12 -- 
common/autotest_common.sh@351 -- # sizes["$mount"]=109395968 00:30:18.590 00:44:12 -- common/autotest_common.sh@352 -- # uses["$mount"]=6334464 00:30:18.590 00:44:12 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253675008 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253679104 00:30:18.590 00:44:12 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:30:18.590 00:44:12 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:30:18.590 00:44:12 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # avails["$mount"]=94170673152 00:30:18.590 00:44:12 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:30:18.590 00:44:12 -- common/autotest_common.sh@352 -- # uses["$mount"]=5532106752 00:30:18.590 00:44:12 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:30:18.590 00:44:12 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:30:18.590 * Looking for test storage... 00:30:18.590 00:44:12 -- common/autotest_common.sh@357 -- # local target_space new_size 00:30:18.590 00:44:12 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:30:18.590 00:44:12 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:30:18.590 00:44:12 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:30:18.590 00:44:12 -- common/autotest_common.sh@361 -- # mount=/ 00:30:18.590 00:44:12 -- common/autotest_common.sh@363 -- # target_space=10372739072 00:30:18.590 00:44:12 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:30:18.590 00:44:12 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:30:18.590 00:44:12 -- common/autotest_common.sh@369 -- # [[ ext4 == tmpfs ]] 00:30:18.590 00:44:12 -- common/autotest_common.sh@369 -- # [[ ext4 == ramfs ]] 00:30:18.590 00:44:12 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:30:18.590 00:44:12 -- common/autotest_common.sh@370 -- # new_size=12441870336 00:30:18.590 00:44:12 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:30:18.590 00:44:12 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:30:18.590 00:44:12 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:30:18.590 00:44:12 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:30:18.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:30:18.590 00:44:12 -- common/autotest_common.sh@378 -- # return 0 00:30:18.590 00:44:12 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:30:18.590 00:44:12 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:30:18.590 00:44:12 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:30:18.590 00:44:12 -- common/autotest_common.sh@1672 -- 
# PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:30:18.590 00:44:12 -- common/autotest_common.sh@1673 -- # true 00:30:18.590 00:44:12 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:30:18.590 00:44:12 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:30:18.590 00:44:12 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:30:18.590 00:44:12 -- common/autotest_common.sh@27 -- # exec 00:30:18.590 00:44:12 -- common/autotest_common.sh@29 -- # exec 00:30:18.590 00:44:12 -- common/autotest_common.sh@31 -- # xtrace_restore 00:30:18.590 00:44:12 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:30:18.590 00:44:12 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:30:18.590 00:44:12 -- common/autotest_common.sh@18 -- # set -x 00:30:18.590 00:44:12 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:18.590 00:44:12 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:30:18.590 00:44:12 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:30:18.590 00:44:12 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:30:18.590 00:44:12 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:30:18.590 00:44:12 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:30:18.590 00:44:12 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:30:18.590 00:44:12 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:30:18.590 00:44:12 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:30:18.590 00:44:12 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.590 00:44:12 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:30:18.590 00:44:12 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142495 00:30:18.590 00:44:12 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:18.590 00:44:12 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:30:18.590 00:44:12 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 142495 /var/tmp/spdk.sock 00:30:18.590 00:44:12 -- common/autotest_common.sh@817 -- # '[' -z 142495 ']' 00:30:18.590 00:44:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.590 00:44:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:18.590 00:44:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.590 00:44:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:18.590 00:44:12 -- common/autotest_common.sh@10 -- # set +x 00:30:18.590 [2024-04-24 00:44:12.346843] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
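[editor's note] The set_test_storage trace above walks a list of candidate directories and keeps the first one whose filesystem can hold the requested size, exporting it as SPDK_TEST_STORAGE. A simplified sketch of that selection, assuming GNU df with --output support; the real helper additionally parses `df -T` into per-mount arrays, special-cases tmpfs/ramfs, and applies the 95% growth check visible in the trace:

```bash
#!/usr/bin/env bash
# Sketch of the test-storage selection traced above (assumption: simplified
# relative to autotest_common.sh's set_test_storage).
set_test_storage() {
    local requested_size=$1 testdir=$2
    local storage_fallback target_dir target_space
    storage_fallback=$(mktemp -udt spdk.XXXXXX)

    # Same candidate order as the trace: the test dir itself, then a
    # per-test subdir of a temp fallback, then the fallback root.
    local -a candidates=(
        "$testdir"
        "$storage_fallback/tests/${testdir##*/}"
        "$storage_fallback"
    )

    for target_dir in "${candidates[@]}"; do
        mkdir -p "$target_dir"
        # Available space in bytes on the filesystem backing the candidate.
        target_space=$(( $(df --output=avail "$target_dir" | tail -1) * 1024 ))
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            return 0
        fi
    done
    return 1
}
```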
00:30:18.590 [2024-04-24 00:44:12.347383] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142495 ] 00:30:18.850 [2024-04-24 00:44:12.542072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:19.108 [2024-04-24 00:44:12.826715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.108 [2024-04-24 00:44:12.826816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.108 [2024-04-24 00:44:12.826812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:19.674 [2024-04-24 00:44:13.238225] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:19.674 00:44:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:19.674 00:44:13 -- common/autotest_common.sh@850 -- # return 0 00:30:19.674 00:44:13 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:30:19.674 00:44:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.674 00:44:13 -- common/autotest_common.sh@10 -- # set +x 00:30:19.674 00:44:13 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:30:19.674 00:44:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.674 00:44:13 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:30:19.674 "name": "app_thread", 00:30:19.674 "id": 1, 00:30:19.674 "active_pollers": [], 00:30:19.674 "timed_pollers": [ 00:30:19.674 { 00:30:19.674 "name": "rpc_subsystem_poll_servers", 00:30:19.674 "id": 1, 00:30:19.674 "state": "waiting", 00:30:19.674 "run_count": 0, 00:30:19.674 "busy_count": 0, 00:30:19.674 "period_ticks": 8400000 00:30:19.674 } 00:30:19.674 ], 00:30:19.674 "paused_pollers": [] 00:30:19.674 }' 00:30:19.674 00:44:13 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:30:19.933 00:44:13 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:30:19.933 00:44:13 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:30:19.933 00:44:13 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:30:19.933 00:44:13 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:30:19.933 00:44:13 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:30:19.933 00:44:13 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:30:19.933 00:44:13 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:19.933 00:44:13 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:30:19.933 5000+0 records in 00:30:19.933 5000+0 records out 00:30:19.933 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0306905 s, 334 MB/s 00:30:19.933 00:44:13 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:30:20.192 AIO0 00:30:20.192 00:44:13 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:20.473 00:44:14 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:30:20.731 00:44:14 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:30:20.731 00:44:14 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:30:20.731 00:44:14 -- common/autotest_common.sh@10 -- # set +x 00:30:20.731 00:44:14 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:30:20.731 00:44:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:20.731 00:44:14 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:30:20.731 "name": "app_thread", 00:30:20.731 "id": 1, 00:30:20.731 "active_pollers": [], 00:30:20.731 "timed_pollers": [ 00:30:20.731 { 00:30:20.731 "name": "rpc_subsystem_poll_servers", 00:30:20.731 "id": 1, 00:30:20.731 "state": "waiting", 00:30:20.731 "run_count": 0, 00:30:20.731 "busy_count": 0, 00:30:20.731 "period_ticks": 8400000 00:30:20.731 } 00:30:20.731 ], 00:30:20.731 "paused_pollers": [] 00:30:20.731 }' 00:30:20.731 00:44:14 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:30:20.731 00:44:14 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:30:20.731 00:44:14 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:30:20.731 00:44:14 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:30:20.731 00:44:14 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:30:20.731 00:44:14 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:30:20.731 00:44:14 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:30:20.731 00:44:14 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 142495 00:30:20.731 00:44:14 -- common/autotest_common.sh@936 -- # '[' -z 142495 ']' 00:30:20.731 00:44:14 -- common/autotest_common.sh@940 -- # kill -0 142495 00:30:20.731 00:44:14 -- common/autotest_common.sh@941 -- # uname 00:30:20.731 00:44:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:20.731 00:44:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142495 00:30:20.731 00:44:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:20.731 00:44:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:20.731 00:44:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142495' 00:30:20.731 killing process with pid 142495 00:30:20.731 00:44:14 -- common/autotest_common.sh@955 -- # kill 142495 00:30:20.731 00:44:14 -- common/autotest_common.sh@960 -- # wait 142495 00:30:22.633 00:44:16 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:30:22.633 00:44:16 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:30:22.633 ************************************ 00:30:22.633 END TEST reap_unregistered_poller 00:30:22.633 ************************************ 00:30:22.633 00:30:22.633 real 0m4.211s 00:30:22.633 user 0m3.753s 00:30:22.633 sys 0m0.656s 00:30:22.634 00:44:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:22.634 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:30:22.634 00:44:16 -- spdk/autotest.sh@194 -- # uname -s 00:30:22.634 00:44:16 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:30:22.634 00:44:16 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:30:22.634 00:44:16 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:30:22.634 00:44:16 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:30:22.634 00:44:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:22.634 00:44:16 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:30:22.634 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:30:22.634 ************************************ 00:30:22.634 START TEST spdk_dd 00:30:22.634 ************************************ 00:30:22.634 00:44:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:30:22.634 * Looking for test storage... 00:30:22.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:22.892 00:44:16 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:22.892 00:44:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.892 00:44:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.892 00:44:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.892 00:44:16 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:22.892 00:44:16 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:22.892 00:44:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:22.892 00:44:16 -- paths/export.sh@5 -- # export PATH 00:30:22.892 00:44:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:22.892 00:44:16 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:23.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:23.151 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:24.086 00:44:17 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:30:24.086 00:44:17 -- dd/dd.sh@11 -- # nvme_in_userspace 00:30:24.086 00:44:17 -- scripts/common.sh@309 -- # local bdf bdfs 00:30:24.086 00:44:17 -- scripts/common.sh@310 -- # local nvmes 00:30:24.086 00:44:17 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:30:24.086 00:44:17 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:24.086 00:44:17 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:30:24.086 00:44:17 -- scripts/common.sh@295 -- # local bdf= 00:30:24.086 00:44:17 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:30:24.086 00:44:17 -- 
scripts/common.sh@230 -- # local class 00:30:24.086 00:44:17 -- scripts/common.sh@231 -- # local subclass 00:30:24.086 00:44:17 -- scripts/common.sh@232 -- # local progif 00:30:24.086 00:44:17 -- scripts/common.sh@233 -- # printf %02x 1 00:30:24.086 00:44:17 -- scripts/common.sh@233 -- # class=01 00:30:24.086 00:44:17 -- scripts/common.sh@234 -- # printf %02x 8 00:30:24.086 00:44:17 -- scripts/common.sh@234 -- # subclass=08 00:30:24.086 00:44:17 -- scripts/common.sh@235 -- # printf %02x 2 00:30:24.086 00:44:17 -- scripts/common.sh@235 -- # progif=02 00:30:24.086 00:44:17 -- scripts/common.sh@237 -- # hash lspci 00:30:24.086 00:44:17 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:30:24.086 00:44:17 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:30:24.086 00:44:17 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:24.086 00:44:17 -- scripts/common.sh@240 -- # grep -i -- -p02 00:30:24.086 00:44:17 -- scripts/common.sh@242 -- # tr -d '"' 00:30:24.086 00:44:17 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:24.086 00:44:17 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:30:24.086 00:44:17 -- scripts/common.sh@15 -- # local i 00:30:24.086 00:44:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:30:24.086 00:44:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:24.086 00:44:17 -- scripts/common.sh@24 -- # return 0 00:30:24.086 00:44:17 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:30:24.086 00:44:17 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:24.086 00:44:17 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:24.086 00:44:17 -- scripts/common.sh@320 -- # uname -s 00:30:24.086 00:44:17 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:24.086 00:44:17 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:24.086 00:44:17 -- scripts/common.sh@325 -- # (( 1 )) 00:30:24.086 00:44:17 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:30:24.086 00:44:17 -- dd/dd.sh@13 -- # check_liburing 00:30:24.086 00:44:17 -- dd/common.sh@139 -- # local lib so 00:30:24.086 00:44:17 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:30:24.086 00:44:17 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == 
liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:30:24.086 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.086 00:44:17 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:30:24.087 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.087 00:44:17 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:30:24.087 00:44:17 -- dd/common.sh@142 -- # read -r lib _ so _ 00:30:24.087 00:44:17 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:30:24.087 00:44:17 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:30:24.087 00:44:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:24.087 00:44:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:24.087 00:44:17 -- common/autotest_common.sh@10 -- # set +x 00:30:24.087 ************************************ 00:30:24.087 START TEST spdk_dd_basic_rw 00:30:24.087 ************************************ 00:30:24.087 00:44:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:30:24.345 * Looking for test storage... 
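What the check_liburing trace above is doing: it launches the spdk_dd binary with LD_TRACE_LOADED_OBJECTS=1 so the dynamic loader only lists the shared objects it would map (the same mechanism ldd uses), then scans each "lib => path" pair for a liburing.so name. A condensed sketch of that pattern; the binary path is abbreviated and this is not the test script verbatim:

    # Ask the dynamic loader which shared objects spdk_dd would map, without
    # actually running the tool, and flag liburing if it shows up.
    liburing_in_use=0
    while read -r lib _ so _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(LD_TRACE_LOADED_OBJECTS=1 ./build/bin/spdk_dd)
    # dd.sh@15 then evaluates (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )),
    # exactly as seen in the trace, before moving on to the read/write tests.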
00:30:24.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:24.346 00:44:17 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:24.346 00:44:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.346 00:44:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.346 00:44:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.346 00:44:17 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:24.346 00:44:17 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:24.346 00:44:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:24.346 00:44:17 -- paths/export.sh@5 -- # export PATH 00:30:24.346 00:44:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:24.346 00:44:17 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:30:24.346 00:44:17 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:30:24.346 00:44:17 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:30:24.346 00:44:17 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:30:24.346 00:44:17 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:30:24.346 00:44:17 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:30:24.346 00:44:17 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:30:24.346 00:44:17 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:24.346 00:44:17 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:24.346 00:44:17 -- 
dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:30:24.346 00:44:17 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:30:24.346 00:44:17 -- dd/common.sh@126 -- # mapfile -t id 00:30:24.346 00:44:17 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:30:24.606 00:44:18 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: 
nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 103 Data Units Written: 7 Host Read Commands: 2255 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable 
Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:30:24.606 00:44:18 -- dd/common.sh@130 -- # lbaf=04 00:30:24.606 00:44:18 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not 
Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset 
Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 103 Data Units Written: 7 Host Read Commands: 2255 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:30:24.606 00:44:18 -- dd/common.sh@132 -- # lbaf=4096 00:30:24.606 00:44:18 -- dd/common.sh@134 -- # echo 4096 00:30:24.606 00:44:18 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:30:24.606 00:44:18 -- dd/basic_rw.sh@96 -- # : 00:30:24.606 00:44:18 -- dd/basic_rw.sh@96 -- # gen_conf 00:30:24.607 00:44:18 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:30:24.607 00:44:18 -- dd/common.sh@31 -- # xtrace_disable 00:30:24.607 00:44:18 -- common/autotest_common.sh@10 -- # set +x 00:30:24.607 00:44:18 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:30:24.607 00:44:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:24.607 00:44:18 -- common/autotest_common.sh@10 -- # set +x 00:30:24.607 { 00:30:24.607 "subsystems": [ 00:30:24.607 
{ 00:30:24.607 "subsystem": "bdev", 00:30:24.607 "config": [ 00:30:24.607 { 00:30:24.607 "params": { 00:30:24.607 "trtype": "pcie", 00:30:24.607 "traddr": "0000:00:10.0", 00:30:24.607 "name": "Nvme0" 00:30:24.607 }, 00:30:24.607 "method": "bdev_nvme_attach_controller" 00:30:24.607 }, 00:30:24.607 { 00:30:24.607 "method": "bdev_wait_for_examine" 00:30:24.607 } 00:30:24.607 ] 00:30:24.607 } 00:30:24.607 ] 00:30:24.607 } 00:30:24.607 ************************************ 00:30:24.607 START TEST dd_bs_lt_native_bs 00:30:24.607 ************************************ 00:30:24.607 00:44:18 -- common/autotest_common.sh@1111 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:30:24.607 00:44:18 -- common/autotest_common.sh@638 -- # local es=0 00:30:24.607 00:44:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:30:24.607 00:44:18 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:24.607 00:44:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:24.607 00:44:18 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:24.607 00:44:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:24.607 00:44:18 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:24.607 00:44:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:24.607 00:44:18 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:24.607 00:44:18 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:24.607 00:44:18 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:30:24.607 [2024-04-24 00:44:18.395959] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
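Two things happen in the lines above and just below. First, get_native_nvme_bs derives the drive's native block size from the spdk_nvme_identify dump: it captures the number in "Current LBA Format: LBA Format #NN" and then the "Data Size" of that format, which resolves to 4096 here. Second, dd_bs_lt_native_bs feeds spdk_dd the bdev config over a file descriptor and deliberately asks for --bs=2048, expecting the failure reported below. A condensed sketch of both steps; tool paths are abbreviated, the fd plumbing and the NOT/valid_exec_arg wrappers of the real test are replaced with plain process substitution, and the input data source is illustrative:

    # 1) Native block size from the identify output.
    id=$(spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}         # "04" here
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] && native_bs=${BASH_REMATCH[1]}    # 4096 here

    # 2) Negative test: a --bs smaller than native_bs must make spdk_dd fail.
    conf='{"subsystems":[{"subsystem":"bdev","config":[
            {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
             "method":"bdev_nvme_attach_controller"},
            {"method":"bdev_wait_for_examine"}]}]}'
    if spdk_dd --if=<(head -c 4096 /dev/zero) --ob=Nvme0n1 --bs=2048 --json <(printf '%s' "$conf"); then
        echo "unexpected success"; exit 1
    fi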
00:30:24.607 [2024-04-24 00:44:18.396162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142838 ] 00:30:24.865 [2024-04-24 00:44:18.579060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.125 [2024-04-24 00:44:18.802356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.691 [2024-04-24 00:44:19.237565] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:30:25.691 [2024-04-24 00:44:19.237658] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:26.625 [2024-04-24 00:44:20.133939] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:30:26.927 00:44:20 -- common/autotest_common.sh@641 -- # es=234 00:30:26.927 00:44:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:26.927 00:44:20 -- common/autotest_common.sh@650 -- # es=106 00:30:26.927 00:44:20 -- common/autotest_common.sh@651 -- # case "$es" in 00:30:26.927 00:44:20 -- common/autotest_common.sh@658 -- # es=1 00:30:26.927 00:44:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:26.927 00:30:26.927 real 0m2.303s 00:30:26.927 user 0m1.981s 00:30:26.927 sys 0m0.222s 00:30:26.927 ************************************ 00:30:26.927 END TEST dd_bs_lt_native_bs 00:30:26.927 00:44:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:26.927 00:44:20 -- common/autotest_common.sh@10 -- # set +x 00:30:26.927 ************************************ 00:30:26.927 00:44:20 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:30:26.927 00:44:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:26.927 00:44:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:26.927 00:44:20 -- common/autotest_common.sh@10 -- # set +x 00:30:26.927 ************************************ 00:30:26.927 START TEST dd_rw 00:30:26.927 ************************************ 00:30:26.927 00:44:20 -- common/autotest_common.sh@1111 -- # basic_rw 4096 00:30:26.927 00:44:20 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:30:26.927 00:44:20 -- dd/basic_rw.sh@12 -- # local count size 00:30:26.927 00:44:20 -- dd/basic_rw.sh@13 -- # local qds bss 00:30:26.927 00:44:20 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:30:26.927 00:44:20 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:30:26.927 00:44:20 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:30:26.927 00:44:20 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:30:26.927 00:44:20 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:30:26.927 00:44:20 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:30:26.927 00:44:20 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:30:26.927 00:44:20 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:30:26.927 00:44:20 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:26.927 00:44:20 -- dd/basic_rw.sh@23 -- # count=15 00:30:26.928 00:44:20 -- dd/basic_rw.sh@24 -- # count=15 00:30:26.928 00:44:20 -- dd/basic_rw.sh@25 -- # size=61440 00:30:26.928 00:44:20 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:30:26.928 00:44:20 -- dd/common.sh@98 -- # xtrace_disable 00:30:26.928 00:44:20 -- common/autotest_common.sh@10 -- # set +x 00:30:27.860 00:44:21 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
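The write just issued above is the first leg of TEST dd_rw, which sweeps block sizes of native_bs << 0..2 (4096, 8192, 16384 bytes) at queue depths 1 and 64. Each combination writes a pre-generated dump file to the Nvme0n1 bdev, reads the same amount back into a second file, and diffs the two. One iteration, condensed; paths are shortened, the run_test/gen_conf helpers are omitted, and the random-data line stands in for the test's own gen_bytes helper:

    # One pass of the basic_rw write/read/verify cycle (bs=4096, qd=1 shown).
    conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
    bs=4096 qd=1 count=15                                 # 15 * 4096 = 61440 bytes, the "60 kB" in the progress lines
    head -c $((bs * count)) /dev/urandom > dd.dump0       # stand-in for gen_bytes 61440
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd                --json <(printf '%s' "$conf")
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=$bs --qd=$qd --count=$count --json <(printf '%s' "$conf")
    diff -q dd.dump0 dd.dump1                             # identical contents prove the round trip
    # clear_nvme then zeroes the written region before the next combination.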
00:30:27.860 00:44:21 -- dd/basic_rw.sh@30 -- # gen_conf 00:30:27.860 00:44:21 -- dd/common.sh@31 -- # xtrace_disable 00:30:27.860 00:44:21 -- common/autotest_common.sh@10 -- # set +x 00:30:27.860 { 00:30:27.860 "subsystems": [ 00:30:27.860 { 00:30:27.860 "subsystem": "bdev", 00:30:27.860 "config": [ 00:30:27.860 { 00:30:27.860 "params": { 00:30:27.860 "trtype": "pcie", 00:30:27.860 "traddr": "0000:00:10.0", 00:30:27.860 "name": "Nvme0" 00:30:27.860 }, 00:30:27.860 "method": "bdev_nvme_attach_controller" 00:30:27.860 }, 00:30:27.860 { 00:30:27.860 "method": "bdev_wait_for_examine" 00:30:27.860 } 00:30:27.860 ] 00:30:27.860 } 00:30:27.860 ] 00:30:27.860 } 00:30:27.860 [2024-04-24 00:44:21.384243] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:30:27.860 [2024-04-24 00:44:21.384929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142904 ] 00:30:27.860 [2024-04-24 00:44:21.566303] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.118 [2024-04-24 00:44:21.856557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.096  Copying: 60/60 [kB] (average 19 MBps) 00:30:30.096 00:30:30.096 00:44:23 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:30:30.096 00:44:23 -- dd/basic_rw.sh@37 -- # gen_conf 00:30:30.096 00:44:23 -- dd/common.sh@31 -- # xtrace_disable 00:30:30.096 00:44:23 -- common/autotest_common.sh@10 -- # set +x 00:30:30.096 { 00:30:30.096 "subsystems": [ 00:30:30.096 { 00:30:30.096 "subsystem": "bdev", 00:30:30.096 "config": [ 00:30:30.096 { 00:30:30.096 "params": { 00:30:30.096 "trtype": "pcie", 00:30:30.096 "traddr": "0000:00:10.0", 00:30:30.096 "name": "Nvme0" 00:30:30.096 }, 00:30:30.096 "method": "bdev_nvme_attach_controller" 00:30:30.096 }, 00:30:30.096 { 00:30:30.096 "method": "bdev_wait_for_examine" 00:30:30.096 } 00:30:30.096 ] 00:30:30.096 } 00:30:30.096 ] 00:30:30.096 } 00:30:30.096 [2024-04-24 00:44:23.614667] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:30:30.097 [2024-04-24 00:44:23.614831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142936 ] 00:30:30.097 [2024-04-24 00:44:23.777142] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.355 [2024-04-24 00:44:24.001720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.296  Copying: 60/60 [kB] (average 19 MBps) 00:30:32.296 00:30:32.296 00:44:25 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:32.296 00:44:25 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:30:32.296 00:44:25 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:32.296 00:44:25 -- dd/common.sh@11 -- # local nvme_ref= 00:30:32.296 00:44:25 -- dd/common.sh@12 -- # local size=61440 00:30:32.296 00:44:25 -- dd/common.sh@14 -- # local bs=1048576 00:30:32.296 00:44:25 -- dd/common.sh@15 -- # local count=1 00:30:32.296 00:44:25 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:32.296 00:44:25 -- dd/common.sh@18 -- # gen_conf 00:30:32.296 00:44:25 -- dd/common.sh@31 -- # xtrace_disable 00:30:32.296 00:44:25 -- common/autotest_common.sh@10 -- # set +x 00:30:32.296 [2024-04-24 00:44:25.880878] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:30:32.296 [2024-04-24 00:44:25.881023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142966 ] 00:30:32.296 { 00:30:32.296 "subsystems": [ 00:30:32.296 { 00:30:32.296 "subsystem": "bdev", 00:30:32.296 "config": [ 00:30:32.296 { 00:30:32.296 "params": { 00:30:32.296 "trtype": "pcie", 00:30:32.296 "traddr": "0000:00:10.0", 00:30:32.296 "name": "Nvme0" 00:30:32.296 }, 00:30:32.296 "method": "bdev_nvme_attach_controller" 00:30:32.296 }, 00:30:32.296 { 00:30:32.296 "method": "bdev_wait_for_examine" 00:30:32.296 } 00:30:32.296 ] 00:30:32.296 } 00:30:32.296 ] 00:30:32.296 } 00:30:32.296 [2024-04-24 00:44:26.050277] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.555 [2024-04-24 00:44:26.331818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.494  Copying: 1024/1024 [kB] (average 500 MBps) 00:30:34.494 00:30:34.494 00:44:27 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:34.494 00:44:27 -- dd/basic_rw.sh@23 -- # count=15 00:30:34.494 00:44:27 -- dd/basic_rw.sh@24 -- # count=15 00:30:34.494 00:44:27 -- dd/basic_rw.sh@25 -- # size=61440 00:30:34.494 00:44:27 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:30:34.494 00:44:27 -- dd/common.sh@98 -- # xtrace_disable 00:30:34.494 00:44:27 -- common/autotest_common.sh@10 -- # set +x 00:30:35.058 00:44:28 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:30:35.058 00:44:28 -- dd/basic_rw.sh@30 -- # gen_conf 00:30:35.058 00:44:28 -- dd/common.sh@31 -- # xtrace_disable 00:30:35.058 00:44:28 -- common/autotest_common.sh@10 -- # set +x 00:30:35.058 { 00:30:35.058 "subsystems": [ 00:30:35.058 { 00:30:35.058 "subsystem": "bdev", 00:30:35.058 "config": [ 00:30:35.058 
{ 00:30:35.058 "params": { 00:30:35.058 "trtype": "pcie", 00:30:35.058 "traddr": "0000:00:10.0", 00:30:35.058 "name": "Nvme0" 00:30:35.058 }, 00:30:35.058 "method": "bdev_nvme_attach_controller" 00:30:35.058 }, 00:30:35.058 { 00:30:35.058 "method": "bdev_wait_for_examine" 00:30:35.058 } 00:30:35.058 ] 00:30:35.058 } 00:30:35.058 ] 00:30:35.058 } 00:30:35.058 [2024-04-24 00:44:28.753595] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:30:35.058 [2024-04-24 00:44:28.753788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143009 ] 00:30:35.315 [2024-04-24 00:44:28.932156] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.572 [2024-04-24 00:44:29.160298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.246  Copying: 60/60 [kB] (average 58 MBps) 00:30:37.246 00:30:37.246 00:44:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:30:37.246 00:44:31 -- dd/basic_rw.sh@37 -- # gen_conf 00:30:37.246 00:44:31 -- dd/common.sh@31 -- # xtrace_disable 00:30:37.246 00:44:31 -- common/autotest_common.sh@10 -- # set +x 00:30:37.504 { 00:30:37.504 "subsystems": [ 00:30:37.504 { 00:30:37.504 "subsystem": "bdev", 00:30:37.504 "config": [ 00:30:37.504 { 00:30:37.504 "params": { 00:30:37.504 "trtype": "pcie", 00:30:37.504 "traddr": "0000:00:10.0", 00:30:37.504 "name": "Nvme0" 00:30:37.504 }, 00:30:37.504 "method": "bdev_nvme_attach_controller" 00:30:37.504 }, 00:30:37.504 { 00:30:37.504 "method": "bdev_wait_for_examine" 00:30:37.504 } 00:30:37.504 ] 00:30:37.504 } 00:30:37.504 ] 00:30:37.504 } 00:30:37.504 [2024-04-24 00:44:31.112139] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:30:37.504 [2024-04-24 00:44:31.112413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143038 ] 00:30:37.504 [2024-04-24 00:44:31.297494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.069 [2024-04-24 00:44:31.590522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.700  Copying: 60/60 [kB] (average 58 MBps) 00:30:39.700 00:30:39.700 00:44:33 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:39.700 00:44:33 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:30:39.700 00:44:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:39.700 00:44:33 -- dd/common.sh@11 -- # local nvme_ref= 00:30:39.700 00:44:33 -- dd/common.sh@12 -- # local size=61440 00:30:39.700 00:44:33 -- dd/common.sh@14 -- # local bs=1048576 00:30:39.700 00:44:33 -- dd/common.sh@15 -- # local count=1 00:30:39.700 00:44:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:39.700 00:44:33 -- dd/common.sh@18 -- # gen_conf 00:30:39.700 00:44:33 -- dd/common.sh@31 -- # xtrace_disable 00:30:39.700 00:44:33 -- common/autotest_common.sh@10 -- # set +x 00:30:39.700 { 00:30:39.700 "subsystems": [ 00:30:39.700 { 00:30:39.700 "subsystem": "bdev", 00:30:39.700 "config": [ 00:30:39.700 { 00:30:39.700 "params": { 00:30:39.700 "trtype": "pcie", 00:30:39.700 "traddr": "0000:00:10.0", 00:30:39.700 "name": "Nvme0" 00:30:39.700 }, 00:30:39.700 "method": "bdev_nvme_attach_controller" 00:30:39.700 }, 00:30:39.700 { 00:30:39.700 "method": "bdev_wait_for_examine" 00:30:39.700 } 00:30:39.700 ] 00:30:39.700 } 00:30:39.700 ] 00:30:39.700 } 00:30:39.700 [2024-04-24 00:44:33.362462] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
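clear_nvme Nvme0n1 '' 61440, traced just above, is the cleanup step between combinations: it copies a single 1 MiB block of zeros onto the bdev, wiping the region the test just used. Roughly, with the same bdev-attach JSON as the other invocations and error handling omitted:

    # clear_nvme, condensed: blanket the test region with zeros (bs=1048576, count=1).
    conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$conf")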
00:30:39.700 [2024-04-24 00:44:33.362606] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143070 ] 00:30:39.958 [2024-04-24 00:44:33.531726] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.217 [2024-04-24 00:44:33.805728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.849  Copying: 1024/1024 [kB] (average 500 MBps) 00:30:41.849 00:30:42.108 00:44:35 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:30:42.108 00:44:35 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:42.108 00:44:35 -- dd/basic_rw.sh@23 -- # count=7 00:30:42.108 00:44:35 -- dd/basic_rw.sh@24 -- # count=7 00:30:42.108 00:44:35 -- dd/basic_rw.sh@25 -- # size=57344 00:30:42.108 00:44:35 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:30:42.108 00:44:35 -- dd/common.sh@98 -- # xtrace_disable 00:30:42.108 00:44:35 -- common/autotest_common.sh@10 -- # set +x 00:30:42.674 00:44:36 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:30:42.674 00:44:36 -- dd/basic_rw.sh@30 -- # gen_conf 00:30:42.674 00:44:36 -- dd/common.sh@31 -- # xtrace_disable 00:30:42.674 00:44:36 -- common/autotest_common.sh@10 -- # set +x 00:30:42.674 { 00:30:42.674 "subsystems": [ 00:30:42.674 { 00:30:42.674 "subsystem": "bdev", 00:30:42.674 "config": [ 00:30:42.674 { 00:30:42.674 "params": { 00:30:42.674 "trtype": "pcie", 00:30:42.674 "traddr": "0000:00:10.0", 00:30:42.674 "name": "Nvme0" 00:30:42.674 }, 00:30:42.674 "method": "bdev_nvme_attach_controller" 00:30:42.674 }, 00:30:42.674 { 00:30:42.674 "method": "bdev_wait_for_examine" 00:30:42.674 } 00:30:42.674 ] 00:30:42.674 } 00:30:42.674 ] 00:30:42.674 } 00:30:42.674 [2024-04-24 00:44:36.251548] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
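At this point the sweep has moved from bs=4096 to bs=8192. The sizes in the trace follow from the shift loop at the start of dd_rw together with the per-size counts: 15 * 4096 = 61440, 7 * 8192 = 57344 and, in the later passes, 3 * 16384 = 49152; those are the 60 kB, 56 kB and 48 kB totals reported by the Copying progress lines. A sketch of the array construction mirrored from basic_rw.sh@15-18 in the trace:

    # The block-size and queue-depth arrays built at the top of dd_rw.
    native_bs=4096
    bss=()
    for bs in {0..2}; do
        bss+=($((native_bs << bs)))   # 4096 8192 16384
    done
    qds=(1 64)                        # each size is driven at qd=1 and qd=64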
00:30:42.674 [2024-04-24 00:44:36.251763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143111 ] 00:30:42.674 [2024-04-24 00:44:36.432105] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.239 [2024-04-24 00:44:36.726358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.870  Copying: 56/56 [kB] (average 54 MBps) 00:30:44.870 00:30:44.870 00:44:38 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:30:44.870 00:44:38 -- dd/basic_rw.sh@37 -- # gen_conf 00:30:44.870 00:44:38 -- dd/common.sh@31 -- # xtrace_disable 00:30:44.870 00:44:38 -- common/autotest_common.sh@10 -- # set +x 00:30:44.870 { 00:30:44.870 "subsystems": [ 00:30:44.870 { 00:30:44.870 "subsystem": "bdev", 00:30:44.870 "config": [ 00:30:44.870 { 00:30:44.870 "params": { 00:30:44.870 "trtype": "pcie", 00:30:44.870 "traddr": "0000:00:10.0", 00:30:44.870 "name": "Nvme0" 00:30:44.870 }, 00:30:44.870 "method": "bdev_nvme_attach_controller" 00:30:44.870 }, 00:30:44.870 { 00:30:44.870 "method": "bdev_wait_for_examine" 00:30:44.870 } 00:30:44.870 ] 00:30:44.870 } 00:30:44.870 ] 00:30:44.870 } 00:30:44.870 [2024-04-24 00:44:38.565920] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:30:44.870 [2024-04-24 00:44:38.566070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143150 ] 00:30:45.129 [2024-04-24 00:44:38.733155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.388 [2024-04-24 00:44:38.959604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.022  Copying: 56/56 [kB] (average 27 MBps) 00:30:47.022 00:30:47.022 00:44:40 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:47.022 00:44:40 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:30:47.022 00:44:40 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:47.022 00:44:40 -- dd/common.sh@11 -- # local nvme_ref= 00:30:47.022 00:44:40 -- dd/common.sh@12 -- # local size=57344 00:30:47.022 00:44:40 -- dd/common.sh@14 -- # local bs=1048576 00:30:47.022 00:44:40 -- dd/common.sh@15 -- # local count=1 00:30:47.280 00:44:40 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:47.280 00:44:40 -- dd/common.sh@18 -- # gen_conf 00:30:47.280 00:44:40 -- dd/common.sh@31 -- # xtrace_disable 00:30:47.280 00:44:40 -- common/autotest_common.sh@10 -- # set +x 00:30:47.280 { 00:30:47.280 "subsystems": [ 00:30:47.280 { 00:30:47.280 "subsystem": "bdev", 00:30:47.280 "config": [ 00:30:47.280 { 00:30:47.280 "params": { 00:30:47.280 "trtype": "pcie", 00:30:47.280 "traddr": "0000:00:10.0", 00:30:47.280 "name": "Nvme0" 00:30:47.280 }, 00:30:47.280 "method": "bdev_nvme_attach_controller" 00:30:47.280 }, 00:30:47.280 { 00:30:47.280 "method": "bdev_wait_for_examine" 00:30:47.280 } 00:30:47.280 ] 00:30:47.280 } 00:30:47.280 ] 00:30:47.280 } 00:30:47.280 [2024-04-24 00:44:40.888579] Starting SPDK v24.05-pre git sha1 
9fa7361db / DPDK 23.11.0 initialization... 00:30:47.280 [2024-04-24 00:44:40.888773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143183 ] 00:30:47.280 [2024-04-24 00:44:41.067689] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.538 [2024-04-24 00:44:41.294417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.477  Copying: 1024/1024 [kB] (average 1000 MBps) 00:30:49.477 00:30:49.477 00:44:42 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:49.477 00:44:42 -- dd/basic_rw.sh@23 -- # count=7 00:30:49.477 00:44:42 -- dd/basic_rw.sh@24 -- # count=7 00:30:49.477 00:44:42 -- dd/basic_rw.sh@25 -- # size=57344 00:30:49.477 00:44:42 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:30:49.477 00:44:42 -- dd/common.sh@98 -- # xtrace_disable 00:30:49.477 00:44:42 -- common/autotest_common.sh@10 -- # set +x 00:30:50.042 00:44:43 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:30:50.042 00:44:43 -- dd/basic_rw.sh@30 -- # gen_conf 00:30:50.042 00:44:43 -- dd/common.sh@31 -- # xtrace_disable 00:30:50.042 00:44:43 -- common/autotest_common.sh@10 -- # set +x 00:30:50.042 { 00:30:50.042 "subsystems": [ 00:30:50.042 { 00:30:50.042 "subsystem": "bdev", 00:30:50.042 "config": [ 00:30:50.042 { 00:30:50.042 "params": { 00:30:50.042 "trtype": "pcie", 00:30:50.042 "traddr": "0000:00:10.0", 00:30:50.042 "name": "Nvme0" 00:30:50.042 }, 00:30:50.042 "method": "bdev_nvme_attach_controller" 00:30:50.042 }, 00:30:50.042 { 00:30:50.042 "method": "bdev_wait_for_examine" 00:30:50.042 } 00:30:50.042 ] 00:30:50.042 } 00:30:50.042 ] 00:30:50.042 } 00:30:50.042 [2024-04-24 00:44:43.600761] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:30:50.042 [2024-04-24 00:44:43.600950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143215 ] 00:30:50.042 [2024-04-24 00:44:43.764103] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.301 [2024-04-24 00:44:44.023015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.238  Copying: 56/56 [kB] (average 54 MBps) 00:30:52.238 00:30:52.238 00:44:45 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:30:52.239 00:44:45 -- dd/basic_rw.sh@37 -- # gen_conf 00:30:52.239 00:44:45 -- dd/common.sh@31 -- # xtrace_disable 00:30:52.239 00:44:45 -- common/autotest_common.sh@10 -- # set +x 00:30:52.239 { 00:30:52.239 "subsystems": [ 00:30:52.239 { 00:30:52.239 "subsystem": "bdev", 00:30:52.239 "config": [ 00:30:52.239 { 00:30:52.239 "params": { 00:30:52.239 "trtype": "pcie", 00:30:52.239 "traddr": "0000:00:10.0", 00:30:52.239 "name": "Nvme0" 00:30:52.239 }, 00:30:52.239 "method": "bdev_nvme_attach_controller" 00:30:52.239 }, 00:30:52.239 { 00:30:52.239 "method": "bdev_wait_for_examine" 00:30:52.239 } 00:30:52.239 ] 00:30:52.239 } 00:30:52.239 ] 00:30:52.239 } 00:30:52.239 [2024-04-24 00:44:45.898696] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:30:52.239 [2024-04-24 00:44:45.898897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143255 ] 00:30:52.497 [2024-04-24 00:44:46.080291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.755 [2024-04-24 00:44:46.305793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.390  Copying: 56/56 [kB] (average 54 MBps) 00:30:54.390 00:30:54.390 00:44:48 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:54.390 00:44:48 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:30:54.390 00:44:48 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:54.390 00:44:48 -- dd/common.sh@11 -- # local nvme_ref= 00:30:54.390 00:44:48 -- dd/common.sh@12 -- # local size=57344 00:30:54.390 00:44:48 -- dd/common.sh@14 -- # local bs=1048576 00:30:54.390 00:44:48 -- dd/common.sh@15 -- # local count=1 00:30:54.390 00:44:48 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:30:54.390 00:44:48 -- dd/common.sh@18 -- # gen_conf 00:30:54.390 00:44:48 -- dd/common.sh@31 -- # xtrace_disable 00:30:54.390 00:44:48 -- common/autotest_common.sh@10 -- # set +x 00:30:54.390 { 00:30:54.390 "subsystems": [ 00:30:54.390 { 00:30:54.390 "subsystem": "bdev", 00:30:54.390 "config": [ 00:30:54.390 { 00:30:54.390 "params": { 00:30:54.390 "trtype": "pcie", 00:30:54.390 "traddr": "0000:00:10.0", 00:30:54.390 "name": "Nvme0" 00:30:54.390 }, 00:30:54.390 "method": "bdev_nvme_attach_controller" 00:30:54.390 }, 00:30:54.390 { 00:30:54.390 "method": "bdev_wait_for_examine" 00:30:54.390 } 00:30:54.390 ] 00:30:54.390 } 00:30:54.390 ] 00:30:54.390 } 00:30:54.390 [2024-04-24 00:44:48.122137] Starting SPDK v24.05-pre git sha1 
9fa7361db / DPDK 23.11.0 initialization... 00:30:54.390 [2024-04-24 00:44:48.122465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143284 ] 00:30:54.649 [2024-04-24 00:44:48.315375] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.907 [2024-04-24 00:44:48.606732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.873  Copying: 1024/1024 [kB] (average 1000 MBps) 00:30:56.874 00:30:56.874 00:44:50 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:30:56.874 00:44:50 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:30:56.874 00:44:50 -- dd/basic_rw.sh@23 -- # count=3 00:30:56.874 00:44:50 -- dd/basic_rw.sh@24 -- # count=3 00:30:56.874 00:44:50 -- dd/basic_rw.sh@25 -- # size=49152 00:30:56.874 00:44:50 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:30:56.874 00:44:50 -- dd/common.sh@98 -- # xtrace_disable 00:30:56.874 00:44:50 -- common/autotest_common.sh@10 -- # set +x 00:30:57.440 00:44:50 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:30:57.440 00:44:50 -- dd/basic_rw.sh@30 -- # gen_conf 00:30:57.440 00:44:50 -- dd/common.sh@31 -- # xtrace_disable 00:30:57.440 00:44:50 -- common/autotest_common.sh@10 -- # set +x 00:30:57.440 { 00:30:57.440 "subsystems": [ 00:30:57.440 { 00:30:57.440 "subsystem": "bdev", 00:30:57.440 "config": [ 00:30:57.440 { 00:30:57.440 "params": { 00:30:57.440 "trtype": "pcie", 00:30:57.440 "traddr": "0000:00:10.0", 00:30:57.440 "name": "Nvme0" 00:30:57.440 }, 00:30:57.441 "method": "bdev_nvme_attach_controller" 00:30:57.441 }, 00:30:57.441 { 00:30:57.441 "method": "bdev_wait_for_examine" 00:30:57.441 } 00:30:57.441 ] 00:30:57.441 } 00:30:57.441 ] 00:30:57.441 } 00:30:57.441 [2024-04-24 00:44:51.052216] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:30:57.441 [2024-04-24 00:44:51.052427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143327 ] 00:30:57.441 [2024-04-24 00:44:51.227349] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.698 [2024-04-24 00:44:51.456539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.678  Copying: 48/48 [kB] (average 46 MBps) 00:30:59.678 00:30:59.678 00:44:53 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:30:59.678 00:44:53 -- dd/basic_rw.sh@37 -- # gen_conf 00:30:59.678 00:44:53 -- dd/common.sh@31 -- # xtrace_disable 00:30:59.678 00:44:53 -- common/autotest_common.sh@10 -- # set +x 00:30:59.678 { 00:30:59.678 "subsystems": [ 00:30:59.678 { 00:30:59.678 "subsystem": "bdev", 00:30:59.678 "config": [ 00:30:59.678 { 00:30:59.678 "params": { 00:30:59.678 "trtype": "pcie", 00:30:59.678 "traddr": "0000:00:10.0", 00:30:59.678 "name": "Nvme0" 00:30:59.679 }, 00:30:59.679 "method": "bdev_nvme_attach_controller" 00:30:59.679 }, 00:30:59.679 { 00:30:59.679 "method": "bdev_wait_for_examine" 00:30:59.679 } 00:30:59.679 ] 00:30:59.679 } 00:30:59.679 ] 00:30:59.679 } 00:30:59.679 [2024-04-24 00:44:53.236993] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:30:59.679 [2024-04-24 00:44:53.237189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143355 ] 00:30:59.679 [2024-04-24 00:44:53.410279] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.937 [2024-04-24 00:44:53.649871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.876  Copying: 48/48 [kB] (average 46 MBps) 00:31:01.876 00:31:01.876 00:44:55 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:01.876 00:44:55 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:31:01.876 00:44:55 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:31:01.876 00:44:55 -- dd/common.sh@11 -- # local nvme_ref= 00:31:01.876 00:44:55 -- dd/common.sh@12 -- # local size=49152 00:31:01.876 00:44:55 -- dd/common.sh@14 -- # local bs=1048576 00:31:01.876 00:44:55 -- dd/common.sh@15 -- # local count=1 00:31:01.876 00:44:55 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:31:01.876 00:44:55 -- dd/common.sh@18 -- # gen_conf 00:31:01.876 00:44:55 -- dd/common.sh@31 -- # xtrace_disable 00:31:01.876 00:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:01.876 { 00:31:01.876 "subsystems": [ 00:31:01.876 { 00:31:01.876 "subsystem": "bdev", 00:31:01.876 "config": [ 00:31:01.876 { 00:31:01.876 "params": { 00:31:01.876 "trtype": "pcie", 00:31:01.876 "traddr": "0000:00:10.0", 00:31:01.876 "name": "Nvme0" 00:31:01.876 }, 00:31:01.876 "method": "bdev_nvme_attach_controller" 00:31:01.876 }, 00:31:01.876 { 00:31:01.876 "method": "bdev_wait_for_examine" 00:31:01.876 } 00:31:01.876 ] 00:31:01.876 } 00:31:01.876 ] 00:31:01.876 } 00:31:01.876 [2024-04-24 00:44:55.629315] Starting SPDK v24.05-pre git sha1 
9fa7361db / DPDK 23.11.0 initialization... 00:31:01.876 [2024-04-24 00:44:55.629501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143390 ] 00:31:02.134 [2024-04-24 00:44:55.808726] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.396 [2024-04-24 00:44:56.035994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.055  Copying: 1024/1024 [kB] (average 1000 MBps) 00:31:04.055 00:31:04.055 00:44:57 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:31:04.055 00:44:57 -- dd/basic_rw.sh@23 -- # count=3 00:31:04.055 00:44:57 -- dd/basic_rw.sh@24 -- # count=3 00:31:04.055 00:44:57 -- dd/basic_rw.sh@25 -- # size=49152 00:31:04.055 00:44:57 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:31:04.055 00:44:57 -- dd/common.sh@98 -- # xtrace_disable 00:31:04.055 00:44:57 -- common/autotest_common.sh@10 -- # set +x 00:31:04.667 00:44:58 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:31:04.668 00:44:58 -- dd/basic_rw.sh@30 -- # gen_conf 00:31:04.668 00:44:58 -- dd/common.sh@31 -- # xtrace_disable 00:31:04.668 00:44:58 -- common/autotest_common.sh@10 -- # set +x 00:31:04.668 { 00:31:04.668 "subsystems": [ 00:31:04.668 { 00:31:04.668 "subsystem": "bdev", 00:31:04.668 "config": [ 00:31:04.668 { 00:31:04.668 "params": { 00:31:04.668 "trtype": "pcie", 00:31:04.668 "traddr": "0000:00:10.0", 00:31:04.668 "name": "Nvme0" 00:31:04.668 }, 00:31:04.668 "method": "bdev_nvme_attach_controller" 00:31:04.668 }, 00:31:04.668 { 00:31:04.668 "method": "bdev_wait_for_examine" 00:31:04.668 } 00:31:04.668 ] 00:31:04.668 } 00:31:04.668 ] 00:31:04.668 } 00:31:04.668 [2024-04-24 00:44:58.389035] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:31:04.668 [2024-04-24 00:44:58.389246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143429 ] 00:31:04.924 [2024-04-24 00:44:58.570365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.182 [2024-04-24 00:44:58.840404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.185  Copying: 48/48 [kB] (average 46 MBps) 00:31:07.185 00:31:07.185 00:45:00 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:31:07.185 00:45:00 -- dd/basic_rw.sh@37 -- # gen_conf 00:31:07.185 00:45:00 -- dd/common.sh@31 -- # xtrace_disable 00:31:07.185 00:45:00 -- common/autotest_common.sh@10 -- # set +x 00:31:07.185 { 00:31:07.185 "subsystems": [ 00:31:07.185 { 00:31:07.185 "subsystem": "bdev", 00:31:07.185 "config": [ 00:31:07.185 { 00:31:07.185 "params": { 00:31:07.185 "trtype": "pcie", 00:31:07.185 "traddr": "0000:00:10.0", 00:31:07.185 "name": "Nvme0" 00:31:07.185 }, 00:31:07.185 "method": "bdev_nvme_attach_controller" 00:31:07.185 }, 00:31:07.185 { 00:31:07.185 "method": "bdev_wait_for_examine" 00:31:07.185 } 00:31:07.185 ] 00:31:07.185 } 00:31:07.185 ] 00:31:07.185 } 00:31:07.185 [2024-04-24 00:45:00.733868] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:31:07.185 [2024-04-24 00:45:00.734054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143461 ] 00:31:07.185 [2024-04-24 00:45:00.914331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.443 [2024-04-24 00:45:01.214263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.427  Copying: 48/48 [kB] (average 46 MBps) 00:31:09.427 00:31:09.427 00:45:02 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:09.427 00:45:02 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:31:09.427 00:45:02 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:31:09.427 00:45:02 -- dd/common.sh@11 -- # local nvme_ref= 00:31:09.427 00:45:02 -- dd/common.sh@12 -- # local size=49152 00:31:09.427 00:45:02 -- dd/common.sh@14 -- # local bs=1048576 00:31:09.427 00:45:02 -- dd/common.sh@15 -- # local count=1 00:31:09.427 00:45:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:31:09.427 00:45:02 -- dd/common.sh@18 -- # gen_conf 00:31:09.427 00:45:02 -- dd/common.sh@31 -- # xtrace_disable 00:31:09.427 00:45:02 -- common/autotest_common.sh@10 -- # set +x 00:31:09.427 { 00:31:09.427 "subsystems": [ 00:31:09.427 { 00:31:09.427 "subsystem": "bdev", 00:31:09.427 "config": [ 00:31:09.427 { 00:31:09.427 "params": { 00:31:09.427 "trtype": "pcie", 00:31:09.427 "traddr": "0000:00:10.0", 00:31:09.427 "name": "Nvme0" 00:31:09.427 }, 00:31:09.427 "method": "bdev_nvme_attach_controller" 00:31:09.427 }, 00:31:09.427 { 00:31:09.427 "method": "bdev_wait_for_examine" 00:31:09.427 } 00:31:09.427 ] 00:31:09.427 } 00:31:09.427 ] 00:31:09.427 } 00:31:09.427 [2024-04-24 00:45:03.001727] Starting SPDK v24.05-pre git sha1 
9fa7361db / DPDK 23.11.0 initialization... 00:31:09.427 [2024-04-24 00:45:03.001881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143493 ] 00:31:09.427 [2024-04-24 00:45:03.171778] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.685 [2024-04-24 00:45:03.451082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.630  Copying: 1024/1024 [kB] (average 500 MBps) 00:31:11.630 00:31:11.630 00:31:11.630 real 0m44.500s 00:31:11.630 user 0m38.159s 00:31:11.630 sys 0m5.057s 00:31:11.630 00:45:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:11.630 ************************************ 00:31:11.630 END TEST dd_rw 00:31:11.630 ************************************ 00:31:11.630 00:45:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.630 00:45:05 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:31:11.630 00:45:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:11.630 00:45:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:11.630 00:45:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.630 ************************************ 00:31:11.630 START TEST dd_rw_offset 00:31:11.630 ************************************ 00:31:11.630 00:45:05 -- common/autotest_common.sh@1111 -- # basic_offset 00:31:11.630 00:45:05 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:31:11.630 00:45:05 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:31:11.630 00:45:05 -- dd/common.sh@98 -- # xtrace_disable 00:31:11.630 00:45:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.630 00:45:05 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:31:11.630 00:45:05 -- dd/basic_rw.sh@56 -- # 
data=hxegtcapqoj64dim9uwpvtvcv94i3y9l8cc7tws461t0f81axbbl2ejmv37mru1femohi99xbnaw4x8gyjz5x3groupb4v49nxfg98gqay8zxcugi5fcwakkkprgt5mcr3zisfztc6tlny5vpo7zl8u3001nox3z3zdphjmghl4lmzxsfk0orbc2kj2px2mw9di0cjalihh603yizkba7vfkl8vnu77m0gu4ug7si4kdm5mjga1h1qouq47ug5zez2g77qdop5hw7r0kzqfz2qdjeja7qwwfp6k2gayngrkzwiy304l27ndgljfbnfcz84nnt8n5z7hp7mv028sklkb8qhohtoprenyl6bxjs1ur7n2au8i55pqvq6x2t747xalfh7p17247jepkf5m6w6r2b4mrgc9p5hnmmq2ffjqfv51xzur2450yasxhiqot2tsic53m5t5lt6grfpt14v6292as8zvtq5xb9zsl6ugfu8hckmi4idrx6owt2lmz2lrayzywy2rwt5c9hmsw9mifzv5grhpyfy6wte5cam1v30tdewv1d01cyboo36b91mjctyw1pyu9f9gefz1n1hx1pd23uqnrwovxox57ubhefx5loo4eysej36kqf8s4xp294lshk1dwx8i8biqgrp83q7smwpqtnv1bmnare4gec1cneb0wge4nic6wbfnxupxz2fswl4r6qys8c3b38hsdzf0ah0xhnncqi3ssvk3w9f98w5l9mdipoth3hge4w9lpta2uom42jiup7rryfc2baw77zekji9zio2ku7z9ljecmt7exhl5jtk7vkm4lhoesdl3jkrd5r46w0n8my2ocfjo3uaj0szu4i4dotgzf7or3sxcc3lvyoh0zi3gnh0fcfqbtv50t92mhcz4qpy5wswo4zk7vh7bhiqig67ip3jzjef8euh8qluqrkjj4xxmrxdlw5pmdc5wb2r12rpah2adgpjfxek66ywwtj7bkhewtlag9xvqnyqb4yw2ckiwvlqaiz8z3txal66l8p5m1arkfsnsgjuyoregl3eazz9qqf83mulmpuu2u6tau0hq9hept21354det1hz9vk3kfcwogqgjdtfsq1wz2rcxr2hkntosgmrozgnc6opy4bivqgw82bnqcuwds65hw4nstcdlvztpd1tslx8dxybwy3zwu7ajejm7qjoofdtq1rpl027u4phac5btv9hmtvo18rxdns9wp3h4pdpi7agtf9qcalvjze262zflu815in6gze98oaavt3wqxsuis3csf2qylao9vqb9uh5ft3ifmcpixavq31jcqawffmc9yf4l1mp6guh8hfomhscw1i3a57yt1gd0dj67frw0rgd1ncx0p3e6dvprxj69d85zzj2grsd2wyhwgs3nwm2hyg9ph5fpgc4swhfes8yjrhxdq17kjsib923mkni8l4qkq2gg18c05izfojpdrk10tsejqtcwszg305rp84nnnp0be7mtn4cmc7w6bsgrmenzd1e6flzdofqvj419p0x7jrfrbp762q0cxzqgdi4iofmjus7rvto2n8xuwfjtzrtbw60e8p3ekz81tjd4olfwkxes7dnem5xgkw6rncdyuqoaijelfoiblha51x1u5hwze0fdu4t15dol8hcutw01l0yze9iub54g70ysp9ookylh5el8t9eus0yl1lxsuryv9oeq53ywpczjmdna5pudodvet98oxlozy76npq9tuzrj9zvxzb84ja593bui5xjqfcmg3u82wqjjikrdn0tonfyh2o4hqkaaaw96czsajguooxn88029ufguv3iuepwwwjlxu0nlvvp8cd3vznka55w01sqv20b5x3gpx1dvylrompi5d3j1such97qudautaucxxxhdgh496ij1rgzfn8hn059ni3mes5wbq5nsiyvwcystxmpak1i4swjxru76w4nsez4cwom988kdav671nt3vk5i1oa7f4d2v04vkvpfbr9cnfidm65r0g96he98mugfl0g28i33zi1yku16b4t9fwx8xnyecuvdt80o4bnfk6y27xqnoafdkmvps6eir7leb8xk48jjjhjmcljkzioltsdih98x55owa2u95e81t9vkjkks8x0qzldvb6i366sy2w1jdqzskjqe7e6v6zhbewpl6yobqg1reaoa8r9be3gcxebl706gsciot25xjrdtgqq0i6y9hadvnan3ov0i7xf13shfzu735e0d567zdviu16t4c3ch7vx15k0wq6nrz9mlnknifmmy5r58crgmut9hzzedhowaxd7nxnoxtsk3fi89mlbazwy92ywkoowv43mfh7gndjxq4b5d1nwpn0k0a5i43x4hgs2b5472n3lrx9pqsv19021ddg4r6rup6makvwmyqsro0tds7obkxxx9hk0mv3jywzzgz3xoj367rj5vf7z5we249vn3g2kfmglglotxzlsbe17sh4njvgi8mm8tm8hacg399kdcqjs3sequdxttv70d7q88hjkth4nbpml6h1952r0thnfptohl7kmdj8hipezzdyhaysng1aalavqmhn2d5fp75dqe79yqs4xp380467f5sb0uh41vwlpawke4ih6drcutzeuka585pft06s9y8bfj54srx4dodx3js557gqcwz5xjm3pq5zrb3e8y0pchfgz80zjdk2hf40my5c1l4fueeic5v3p5r0ewvowvbjqb01lzjd67n7llxs6ftp0s9apdop93airmcoxl8ujrqekgtey8jhba1acros3722dji91hsgsv7j2agu3mlo0y4nccv19o1hk7uj71613pmkrbnk2nqwpczz5ri3j1a1rh4x8pam9enkwbzo14wtv3hw1vneprml9ecnnmme0axibjhg4qc9d1ekuwha5k7j1brc1fg3bebzjco7r7ua2cyah8eg524nn7jj5xbypg4iy3e48f67yzdwmyqcs844slaoq4ktxg4w9uyosc16ab0v821jb6ey2yu59uvebeniqvs6ggxtd6noa3gls9so83pa4miz2dtn1s236i8d0jsd2he6iu7z6h4icx9xublrx85h1ofix55admmrypbthvgt5d6c5lmotqsbm79u5m4keadaw7ec7nn3af046rdhom4l72mkb5kg0qaxac1gta3fcyxo2hzdhhkyam6njmxdvd1y3dgiq97bb887x5sw4crv1yp8p9xtgiduva5kdvbc3upfvv6fva9930n9ug1xhnc6h0s4uvs2k33u0e8v2z7is69tb9x8mf4kdp1we9gb00luikbpwnilz000hmlktn6lzzoymc097g1jylep21qn85kp3npcq66jznxhjq5w2p8eund4uuax25t1v2r5a3rn8ulvy5oueuc0utwsi14gvzy4uz7mylfl2rxmtoym5aqdtkc1ptay95a4olcksizavvvnapcv3i3xw13g6wv5ll5dx10t7svz28d4mmxadsxf0u7x
qr8a4emb5jivrnhscm76h8sj0389qp54gcrrm5qsl6bx4tw36xobwcepvz7aco2y3dg8azr8953dwq7gldgi79vzeedmxfxix5hwjk0aqgbree8srhhej3w5gi8rdhpcllk6p8sse4ynnb1o9xlozetv0xynmarkz9h7m64650sx45i011ighkymufkeassbrwwcrcb709wm4w97qrq0x2dqwqz3l9bqujs7v8bsbm58m014vl620ce3h2o7cjdhiyiiuwf8ml5tqci3fgpuxp75d4hff5bd3u8gutvnkrcjbvgtikvv73tr46t5ytieyxt531d4zu5jtg9fhot21w9lguu0bs0pwidu28qpamo9sqd4ofs5zz1pzbfxy0hzkocbrwa1h29v5nmelx97xmsbc6mw632ogfjq5h1y1zpbwmpjabkjluif7hpyh8qinx6i6nrhubyiefmb6nhmaeqdsvwsxrdzboa63xpyk1qq46fr8yq77se7rae7vtq4syc0skvidyhetjtmxsbq1b3pnkbc3agip5 00:31:11.630 00:45:05 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:31:11.630 00:45:05 -- dd/basic_rw.sh@59 -- # gen_conf 00:31:11.630 00:45:05 -- dd/common.sh@31 -- # xtrace_disable 00:31:11.630 00:45:05 -- common/autotest_common.sh@10 -- # set +x 00:31:11.889 { 00:31:11.889 "subsystems": [ 00:31:11.889 { 00:31:11.889 "subsystem": "bdev", 00:31:11.889 "config": [ 00:31:11.889 { 00:31:11.889 "params": { 00:31:11.889 "trtype": "pcie", 00:31:11.889 "traddr": "0000:00:10.0", 00:31:11.889 "name": "Nvme0" 00:31:11.889 }, 00:31:11.889 "method": "bdev_nvme_attach_controller" 00:31:11.889 }, 00:31:11.889 { 00:31:11.889 "method": "bdev_wait_for_examine" 00:31:11.889 } 00:31:11.889 ] 00:31:11.889 } 00:31:11.889 ] 00:31:11.889 } 00:31:11.889 [2024-04-24 00:45:05.464595] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:31:11.889 [2024-04-24 00:45:05.464794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143554 ] 00:31:11.889 [2024-04-24 00:45:05.644241] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.147 [2024-04-24 00:45:05.861797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.088  Copying: 4096/4096 [B] (average 4000 kBps) 00:31:14.088 00:31:14.088 00:45:07 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:31:14.088 00:45:07 -- dd/basic_rw.sh@65 -- # gen_conf 00:31:14.088 00:45:07 -- dd/common.sh@31 -- # xtrace_disable 00:31:14.088 00:45:07 -- common/autotest_common.sh@10 -- # set +x 00:31:14.088 { 00:31:14.088 "subsystems": [ 00:31:14.088 { 00:31:14.088 "subsystem": "bdev", 00:31:14.088 "config": [ 00:31:14.088 { 00:31:14.088 "params": { 00:31:14.088 "trtype": "pcie", 00:31:14.088 "traddr": "0000:00:10.0", 00:31:14.088 "name": "Nvme0" 00:31:14.088 }, 00:31:14.088 "method": "bdev_nvme_attach_controller" 00:31:14.088 }, 00:31:14.088 { 00:31:14.088 "method": "bdev_wait_for_examine" 00:31:14.088 } 00:31:14.088 ] 00:31:14.088 } 00:31:14.088 ] 00:31:14.088 } 00:31:14.088 [2024-04-24 00:45:07.546691] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:31:14.088 [2024-04-24 00:45:07.546818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143590 ] 00:31:14.088 [2024-04-24 00:45:07.714256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.347 [2024-04-24 00:45:07.933466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.052  Copying: 4096/4096 [B] (average 4000 kBps) 00:31:16.052 00:31:16.052 00:45:09 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:31:16.052 00:45:09 -- dd/basic_rw.sh@72 -- # [[ hxegtcapqoj64dim9uwpvtvcv94i3y9l8cc7tws461t0f81axbbl2ejmv37mru1femohi99xbnaw4x8gyjz5x3groupb4v49nxfg98gqay8zxcugi5fcwakkkprgt5mcr3zisfztc6tlny5vpo7zl8u3001nox3z3zdphjmghl4lmzxsfk0orbc2kj2px2mw9di0cjalihh603yizkba7vfkl8vnu77m0gu4ug7si4kdm5mjga1h1qouq47ug5zez2g77qdop5hw7r0kzqfz2qdjeja7qwwfp6k2gayngrkzwiy304l27ndgljfbnfcz84nnt8n5z7hp7mv028sklkb8qhohtoprenyl6bxjs1ur7n2au8i55pqvq6x2t747xalfh7p17247jepkf5m6w6r2b4mrgc9p5hnmmq2ffjqfv51xzur2450yasxhiqot2tsic53m5t5lt6grfpt14v6292as8zvtq5xb9zsl6ugfu8hckmi4idrx6owt2lmz2lrayzywy2rwt5c9hmsw9mifzv5grhpyfy6wte5cam1v30tdewv1d01cyboo36b91mjctyw1pyu9f9gefz1n1hx1pd23uqnrwovxox57ubhefx5loo4eysej36kqf8s4xp294lshk1dwx8i8biqgrp83q7smwpqtnv1bmnare4gec1cneb0wge4nic6wbfnxupxz2fswl4r6qys8c3b38hsdzf0ah0xhnncqi3ssvk3w9f98w5l9mdipoth3hge4w9lpta2uom42jiup7rryfc2baw77zekji9zio2ku7z9ljecmt7exhl5jtk7vkm4lhoesdl3jkrd5r46w0n8my2ocfjo3uaj0szu4i4dotgzf7or3sxcc3lvyoh0zi3gnh0fcfqbtv50t92mhcz4qpy5wswo4zk7vh7bhiqig67ip3jzjef8euh8qluqrkjj4xxmrxdlw5pmdc5wb2r12rpah2adgpjfxek66ywwtj7bkhewtlag9xvqnyqb4yw2ckiwvlqaiz8z3txal66l8p5m1arkfsnsgjuyoregl3eazz9qqf83mulmpuu2u6tau0hq9hept21354det1hz9vk3kfcwogqgjdtfsq1wz2rcxr2hkntosgmrozgnc6opy4bivqgw82bnqcuwds65hw4nstcdlvztpd1tslx8dxybwy3zwu7ajejm7qjoofdtq1rpl027u4phac5btv9hmtvo18rxdns9wp3h4pdpi7agtf9qcalvjze262zflu815in6gze98oaavt3wqxsuis3csf2qylao9vqb9uh5ft3ifmcpixavq31jcqawffmc9yf4l1mp6guh8hfomhscw1i3a57yt1gd0dj67frw0rgd1ncx0p3e6dvprxj69d85zzj2grsd2wyhwgs3nwm2hyg9ph5fpgc4swhfes8yjrhxdq17kjsib923mkni8l4qkq2gg18c05izfojpdrk10tsejqtcwszg305rp84nnnp0be7mtn4cmc7w6bsgrmenzd1e6flzdofqvj419p0x7jrfrbp762q0cxzqgdi4iofmjus7rvto2n8xuwfjtzrtbw60e8p3ekz81tjd4olfwkxes7dnem5xgkw6rncdyuqoaijelfoiblha51x1u5hwze0fdu4t15dol8hcutw01l0yze9iub54g70ysp9ookylh5el8t9eus0yl1lxsuryv9oeq53ywpczjmdna5pudodvet98oxlozy76npq9tuzrj9zvxzb84ja593bui5xjqfcmg3u82wqjjikrdn0tonfyh2o4hqkaaaw96czsajguooxn88029ufguv3iuepwwwjlxu0nlvvp8cd3vznka55w01sqv20b5x3gpx1dvylrompi5d3j1such97qudautaucxxxhdgh496ij1rgzfn8hn059ni3mes5wbq5nsiyvwcystxmpak1i4swjxru76w4nsez4cwom988kdav671nt3vk5i1oa7f4d2v04vkvpfbr9cnfidm65r0g96he98mugfl0g28i33zi1yku16b4t9fwx8xnyecuvdt80o4bnfk6y27xqnoafdkmvps6eir7leb8xk48jjjhjmcljkzioltsdih98x55owa2u95e81t9vkjkks8x0qzldvb6i366sy2w1jdqzskjqe7e6v6zhbewpl6yobqg1reaoa8r9be3gcxebl706gsciot25xjrdtgqq0i6y9hadvnan3ov0i7xf13shfzu735e0d567zdviu16t4c3ch7vx15k0wq6nrz9mlnknifmmy5r58crgmut9hzzedhowaxd7nxnoxtsk3fi89mlbazwy92ywkoowv43mfh7gndjxq4b5d1nwpn0k0a5i43x4hgs2b5472n3lrx9pqsv19021ddg4r6rup6makvwmyqsro0tds7obkxxx9hk0mv3jywzzgz3xoj367rj5vf7z5we249vn3g2kfmglglotxzlsbe17sh4njvgi8mm8tm8hacg399kdcqjs3sequdxttv70d7q88hjkth4nbpml6h1952r0thnfptohl7kmdj8hipezzdyhaysng1aalavqmhn2d5fp75dqe79yqs4xp380467f5sb0uh41vwlpawke4ih6drcutzeuka585pft06s9y8bfj54srx4dodx3js557gqcwz5xjm3pq5zrb3e8y0pchfgz80zjdk2hf40my5c1l4fueeic5v3p5r0ewvowvbjqb01lzjd67n7llxs6ftp0s9apdop93airmcoxl8ujrqekgtey8jhba1acros3722dji91hsgsv7j2agu3mlo0y4nccv19o
1hk7uj71613pmkrbnk2nqwpczz5ri3j1a1rh4x8pam9enkwbzo14wtv3hw1vneprml9ecnnmme0axibjhg4qc9d1ekuwha5k7j1brc1fg3bebzjco7r7ua2cyah8eg524nn7jj5xbypg4iy3e48f67yzdwmyqcs844slaoq4ktxg4w9uyosc16ab0v821jb6ey2yu59uvebeniqvs6ggxtd6noa3gls9so83pa4miz2dtn1s236i8d0jsd2he6iu7z6h4icx9xublrx85h1ofix55admmrypbthvgt5d6c5lmotqsbm79u5m4keadaw7ec7nn3af046rdhom4l72mkb5kg0qaxac1gta3fcyxo2hzdhhkyam6njmxdvd1y3dgiq97bb887x5sw4crv1yp8p9xtgiduva5kdvbc3upfvv6fva9930n9ug1xhnc6h0s4uvs2k33u0e8v2z7is69tb9x8mf4kdp1we9gb00luikbpwnilz000hmlktn6lzzoymc097g1jylep21qn85kp3npcq66jznxhjq5w2p8eund4uuax25t1v2r5a3rn8ulvy5oueuc0utwsi14gvzy4uz7mylfl2rxmtoym5aqdtkc1ptay95a4olcksizavvvnapcv3i3xw13g6wv5ll5dx10t7svz28d4mmxadsxf0u7xqr8a4emb5jivrnhscm76h8sj0389qp54gcrrm5qsl6bx4tw36xobwcepvz7aco2y3dg8azr8953dwq7gldgi79vzeedmxfxix5hwjk0aqgbree8srhhej3w5gi8rdhpcllk6p8sse4ynnb1o9xlozetv0xynmarkz9h7m64650sx45i011ighkymufkeassbrwwcrcb709wm4w97qrq0x2dqwqz3l9bqujs7v8bsbm58m014vl620ce3h2o7cjdhiyiiuwf8ml5tqci3fgpuxp75d4hff5bd3u8gutvnkrcjbvgtikvv73tr46t5ytieyxt531d4zu5jtg9fhot21w9lguu0bs0pwidu28qpamo9sqd4ofs5zz1pzbfxy0hzkocbrwa1h29v5nmelx97xmsbc6mw632ogfjq5h1y1zpbwmpjabkjluif7hpyh8qinx6i6nrhubyiefmb6nhmaeqdsvwsxrdzboa63xpyk1qq46fr8yq77se7rae7vtq4syc0skvidyhetjtmxsbq1b3pnkbc3agip5 == \h\x\e\g\t\c\a\p\q\o\j\6\4\d\i\m\9\u\w\p\v\t\v\c\v\9\4\i\3\y\9\l\8\c\c\7\t\w\s\4\6\1\t\0\f\8\1\a\x\b\b\l\2\e\j\m\v\3\7\m\r\u\1\f\e\m\o\h\i\9\9\x\b\n\a\w\4\x\8\g\y\j\z\5\x\3\g\r\o\u\p\b\4\v\4\9\n\x\f\g\9\8\g\q\a\y\8\z\x\c\u\g\i\5\f\c\w\a\k\k\k\p\r\g\t\5\m\c\r\3\z\i\s\f\z\t\c\6\t\l\n\y\5\v\p\o\7\z\l\8\u\3\0\0\1\n\o\x\3\z\3\z\d\p\h\j\m\g\h\l\4\l\m\z\x\s\f\k\0\o\r\b\c\2\k\j\2\p\x\2\m\w\9\d\i\0\c\j\a\l\i\h\h\6\0\3\y\i\z\k\b\a\7\v\f\k\l\8\v\n\u\7\7\m\0\g\u\4\u\g\7\s\i\4\k\d\m\5\m\j\g\a\1\h\1\q\o\u\q\4\7\u\g\5\z\e\z\2\g\7\7\q\d\o\p\5\h\w\7\r\0\k\z\q\f\z\2\q\d\j\e\j\a\7\q\w\w\f\p\6\k\2\g\a\y\n\g\r\k\z\w\i\y\3\0\4\l\2\7\n\d\g\l\j\f\b\n\f\c\z\8\4\n\n\t\8\n\5\z\7\h\p\7\m\v\0\2\8\s\k\l\k\b\8\q\h\o\h\t\o\p\r\e\n\y\l\6\b\x\j\s\1\u\r\7\n\2\a\u\8\i\5\5\p\q\v\q\6\x\2\t\7\4\7\x\a\l\f\h\7\p\1\7\2\4\7\j\e\p\k\f\5\m\6\w\6\r\2\b\4\m\r\g\c\9\p\5\h\n\m\m\q\2\f\f\j\q\f\v\5\1\x\z\u\r\2\4\5\0\y\a\s\x\h\i\q\o\t\2\t\s\i\c\5\3\m\5\t\5\l\t\6\g\r\f\p\t\1\4\v\6\2\9\2\a\s\8\z\v\t\q\5\x\b\9\z\s\l\6\u\g\f\u\8\h\c\k\m\i\4\i\d\r\x\6\o\w\t\2\l\m\z\2\l\r\a\y\z\y\w\y\2\r\w\t\5\c\9\h\m\s\w\9\m\i\f\z\v\5\g\r\h\p\y\f\y\6\w\t\e\5\c\a\m\1\v\3\0\t\d\e\w\v\1\d\0\1\c\y\b\o\o\3\6\b\9\1\m\j\c\t\y\w\1\p\y\u\9\f\9\g\e\f\z\1\n\1\h\x\1\p\d\2\3\u\q\n\r\w\o\v\x\o\x\5\7\u\b\h\e\f\x\5\l\o\o\4\e\y\s\e\j\3\6\k\q\f\8\s\4\x\p\2\9\4\l\s\h\k\1\d\w\x\8\i\8\b\i\q\g\r\p\8\3\q\7\s\m\w\p\q\t\n\v\1\b\m\n\a\r\e\4\g\e\c\1\c\n\e\b\0\w\g\e\4\n\i\c\6\w\b\f\n\x\u\p\x\z\2\f\s\w\l\4\r\6\q\y\s\8\c\3\b\3\8\h\s\d\z\f\0\a\h\0\x\h\n\n\c\q\i\3\s\s\v\k\3\w\9\f\9\8\w\5\l\9\m\d\i\p\o\t\h\3\h\g\e\4\w\9\l\p\t\a\2\u\o\m\4\2\j\i\u\p\7\r\r\y\f\c\2\b\a\w\7\7\z\e\k\j\i\9\z\i\o\2\k\u\7\z\9\l\j\e\c\m\t\7\e\x\h\l\5\j\t\k\7\v\k\m\4\l\h\o\e\s\d\l\3\j\k\r\d\5\r\4\6\w\0\n\8\m\y\2\o\c\f\j\o\3\u\a\j\0\s\z\u\4\i\4\d\o\t\g\z\f\7\o\r\3\s\x\c\c\3\l\v\y\o\h\0\z\i\3\g\n\h\0\f\c\f\q\b\t\v\5\0\t\9\2\m\h\c\z\4\q\p\y\5\w\s\w\o\4\z\k\7\v\h\7\b\h\i\q\i\g\6\7\i\p\3\j\z\j\e\f\8\e\u\h\8\q\l\u\q\r\k\j\j\4\x\x\m\r\x\d\l\w\5\p\m\d\c\5\w\b\2\r\1\2\r\p\a\h\2\a\d\g\p\j\f\x\e\k\6\6\y\w\w\t\j\7\b\k\h\e\w\t\l\a\g\9\x\v\q\n\y\q\b\4\y\w\2\c\k\i\w\v\l\q\a\i\z\8\z\3\t\x\a\l\6\6\l\8\p\5\m\1\a\r\k\f\s\n\s\g\j\u\y\o\r\e\g\l\3\e\a\z\z\9\q\q\f\8\3\m\u\l\m\p\u\u\2\u\6\t\a\u\0\h\q\9\h\e\p\t\2\1\3\5\4\d\e\t\1\h\z\9\v\k\3\k\f\c\w\o\g\q\g\j\d\t\f\s\q\1\w\z\2\r\c\x\r\2\h\k\n\t\o\s\g\m\r\o\z\g\n\c\6\o\p\y\4\b\i\v\q\g\w\
8\2\b\n\q\c\u\w\d\s\6\5\h\w\4\n\s\t\c\d\l\v\z\t\p\d\1\t\s\l\x\8\d\x\y\b\w\y\3\z\w\u\7\a\j\e\j\m\7\q\j\o\o\f\d\t\q\1\r\p\l\0\2\7\u\4\p\h\a\c\5\b\t\v\9\h\m\t\v\o\1\8\r\x\d\n\s\9\w\p\3\h\4\p\d\p\i\7\a\g\t\f\9\q\c\a\l\v\j\z\e\2\6\2\z\f\l\u\8\1\5\i\n\6\g\z\e\9\8\o\a\a\v\t\3\w\q\x\s\u\i\s\3\c\s\f\2\q\y\l\a\o\9\v\q\b\9\u\h\5\f\t\3\i\f\m\c\p\i\x\a\v\q\3\1\j\c\q\a\w\f\f\m\c\9\y\f\4\l\1\m\p\6\g\u\h\8\h\f\o\m\h\s\c\w\1\i\3\a\5\7\y\t\1\g\d\0\d\j\6\7\f\r\w\0\r\g\d\1\n\c\x\0\p\3\e\6\d\v\p\r\x\j\6\9\d\8\5\z\z\j\2\g\r\s\d\2\w\y\h\w\g\s\3\n\w\m\2\h\y\g\9\p\h\5\f\p\g\c\4\s\w\h\f\e\s\8\y\j\r\h\x\d\q\1\7\k\j\s\i\b\9\2\3\m\k\n\i\8\l\4\q\k\q\2\g\g\1\8\c\0\5\i\z\f\o\j\p\d\r\k\1\0\t\s\e\j\q\t\c\w\s\z\g\3\0\5\r\p\8\4\n\n\n\p\0\b\e\7\m\t\n\4\c\m\c\7\w\6\b\s\g\r\m\e\n\z\d\1\e\6\f\l\z\d\o\f\q\v\j\4\1\9\p\0\x\7\j\r\f\r\b\p\7\6\2\q\0\c\x\z\q\g\d\i\4\i\o\f\m\j\u\s\7\r\v\t\o\2\n\8\x\u\w\f\j\t\z\r\t\b\w\6\0\e\8\p\3\e\k\z\8\1\t\j\d\4\o\l\f\w\k\x\e\s\7\d\n\e\m\5\x\g\k\w\6\r\n\c\d\y\u\q\o\a\i\j\e\l\f\o\i\b\l\h\a\5\1\x\1\u\5\h\w\z\e\0\f\d\u\4\t\1\5\d\o\l\8\h\c\u\t\w\0\1\l\0\y\z\e\9\i\u\b\5\4\g\7\0\y\s\p\9\o\o\k\y\l\h\5\e\l\8\t\9\e\u\s\0\y\l\1\l\x\s\u\r\y\v\9\o\e\q\5\3\y\w\p\c\z\j\m\d\n\a\5\p\u\d\o\d\v\e\t\9\8\o\x\l\o\z\y\7\6\n\p\q\9\t\u\z\r\j\9\z\v\x\z\b\8\4\j\a\5\9\3\b\u\i\5\x\j\q\f\c\m\g\3\u\8\2\w\q\j\j\i\k\r\d\n\0\t\o\n\f\y\h\2\o\4\h\q\k\a\a\a\w\9\6\c\z\s\a\j\g\u\o\o\x\n\8\8\0\2\9\u\f\g\u\v\3\i\u\e\p\w\w\w\j\l\x\u\0\n\l\v\v\p\8\c\d\3\v\z\n\k\a\5\5\w\0\1\s\q\v\2\0\b\5\x\3\g\p\x\1\d\v\y\l\r\o\m\p\i\5\d\3\j\1\s\u\c\h\9\7\q\u\d\a\u\t\a\u\c\x\x\x\h\d\g\h\4\9\6\i\j\1\r\g\z\f\n\8\h\n\0\5\9\n\i\3\m\e\s\5\w\b\q\5\n\s\i\y\v\w\c\y\s\t\x\m\p\a\k\1\i\4\s\w\j\x\r\u\7\6\w\4\n\s\e\z\4\c\w\o\m\9\8\8\k\d\a\v\6\7\1\n\t\3\v\k\5\i\1\o\a\7\f\4\d\2\v\0\4\v\k\v\p\f\b\r\9\c\n\f\i\d\m\6\5\r\0\g\9\6\h\e\9\8\m\u\g\f\l\0\g\2\8\i\3\3\z\i\1\y\k\u\1\6\b\4\t\9\f\w\x\8\x\n\y\e\c\u\v\d\t\8\0\o\4\b\n\f\k\6\y\2\7\x\q\n\o\a\f\d\k\m\v\p\s\6\e\i\r\7\l\e\b\8\x\k\4\8\j\j\j\h\j\m\c\l\j\k\z\i\o\l\t\s\d\i\h\9\8\x\5\5\o\w\a\2\u\9\5\e\8\1\t\9\v\k\j\k\k\s\8\x\0\q\z\l\d\v\b\6\i\3\6\6\s\y\2\w\1\j\d\q\z\s\k\j\q\e\7\e\6\v\6\z\h\b\e\w\p\l\6\y\o\b\q\g\1\r\e\a\o\a\8\r\9\b\e\3\g\c\x\e\b\l\7\0\6\g\s\c\i\o\t\2\5\x\j\r\d\t\g\q\q\0\i\6\y\9\h\a\d\v\n\a\n\3\o\v\0\i\7\x\f\1\3\s\h\f\z\u\7\3\5\e\0\d\5\6\7\z\d\v\i\u\1\6\t\4\c\3\c\h\7\v\x\1\5\k\0\w\q\6\n\r\z\9\m\l\n\k\n\i\f\m\m\y\5\r\5\8\c\r\g\m\u\t\9\h\z\z\e\d\h\o\w\a\x\d\7\n\x\n\o\x\t\s\k\3\f\i\8\9\m\l\b\a\z\w\y\9\2\y\w\k\o\o\w\v\4\3\m\f\h\7\g\n\d\j\x\q\4\b\5\d\1\n\w\p\n\0\k\0\a\5\i\4\3\x\4\h\g\s\2\b\5\4\7\2\n\3\l\r\x\9\p\q\s\v\1\9\0\2\1\d\d\g\4\r\6\r\u\p\6\m\a\k\v\w\m\y\q\s\r\o\0\t\d\s\7\o\b\k\x\x\x\9\h\k\0\m\v\3\j\y\w\z\z\g\z\3\x\o\j\3\6\7\r\j\5\v\f\7\z\5\w\e\2\4\9\v\n\3\g\2\k\f\m\g\l\g\l\o\t\x\z\l\s\b\e\1\7\s\h\4\n\j\v\g\i\8\m\m\8\t\m\8\h\a\c\g\3\9\9\k\d\c\q\j\s\3\s\e\q\u\d\x\t\t\v\7\0\d\7\q\8\8\h\j\k\t\h\4\n\b\p\m\l\6\h\1\9\5\2\r\0\t\h\n\f\p\t\o\h\l\7\k\m\d\j\8\h\i\p\e\z\z\d\y\h\a\y\s\n\g\1\a\a\l\a\v\q\m\h\n\2\d\5\f\p\7\5\d\q\e\7\9\y\q\s\4\x\p\3\8\0\4\6\7\f\5\s\b\0\u\h\4\1\v\w\l\p\a\w\k\e\4\i\h\6\d\r\c\u\t\z\e\u\k\a\5\8\5\p\f\t\0\6\s\9\y\8\b\f\j\5\4\s\r\x\4\d\o\d\x\3\j\s\5\5\7\g\q\c\w\z\5\x\j\m\3\p\q\5\z\r\b\3\e\8\y\0\p\c\h\f\g\z\8\0\z\j\d\k\2\h\f\4\0\m\y\5\c\1\l\4\f\u\e\e\i\c\5\v\3\p\5\r\0\e\w\v\o\w\v\b\j\q\b\0\1\l\z\j\d\6\7\n\7\l\l\x\s\6\f\t\p\0\s\9\a\p\d\o\p\9\3\a\i\r\m\c\o\x\l\8\u\j\r\q\e\k\g\t\e\y\8\j\h\b\a\1\a\c\r\o\s\3\7\2\2\d\j\i\9\1\h\s\g\s\v\7\j\2\a\g\u\3\m\l\o\0\y\4\n\c\c\v\1\9\o\1\h\k\7\u\j\7\1\6\1\3\p\m\k\r\b\n\k\2\n\q\w\p\c\z\z\5\r\i\3\j\1\a\1\r\h\4\x\8\p\a\m\9\e\n\k\w\b\z\o\1\4\w\t\v\3\h\w\1\v\n\e\p\r\m\l\9\e\c\n\n\m\m
\e\0\a\x\i\b\j\h\g\4\q\c\9\d\1\e\k\u\w\h\a\5\k\7\j\1\b\r\c\1\f\g\3\b\e\b\z\j\c\o\7\r\7\u\a\2\c\y\a\h\8\e\g\5\2\4\n\n\7\j\j\5\x\b\y\p\g\4\i\y\3\e\4\8\f\6\7\y\z\d\w\m\y\q\c\s\8\4\4\s\l\a\o\q\4\k\t\x\g\4\w\9\u\y\o\s\c\1\6\a\b\0\v\8\2\1\j\b\6\e\y\2\y\u\5\9\u\v\e\b\e\n\i\q\v\s\6\g\g\x\t\d\6\n\o\a\3\g\l\s\9\s\o\8\3\p\a\4\m\i\z\2\d\t\n\1\s\2\3\6\i\8\d\0\j\s\d\2\h\e\6\i\u\7\z\6\h\4\i\c\x\9\x\u\b\l\r\x\8\5\h\1\o\f\i\x\5\5\a\d\m\m\r\y\p\b\t\h\v\g\t\5\d\6\c\5\l\m\o\t\q\s\b\m\7\9\u\5\m\4\k\e\a\d\a\w\7\e\c\7\n\n\3\a\f\0\4\6\r\d\h\o\m\4\l\7\2\m\k\b\5\k\g\0\q\a\x\a\c\1\g\t\a\3\f\c\y\x\o\2\h\z\d\h\h\k\y\a\m\6\n\j\m\x\d\v\d\1\y\3\d\g\i\q\9\7\b\b\8\8\7\x\5\s\w\4\c\r\v\1\y\p\8\p\9\x\t\g\i\d\u\v\a\5\k\d\v\b\c\3\u\p\f\v\v\6\f\v\a\9\9\3\0\n\9\u\g\1\x\h\n\c\6\h\0\s\4\u\v\s\2\k\3\3\u\0\e\8\v\2\z\7\i\s\6\9\t\b\9\x\8\m\f\4\k\d\p\1\w\e\9\g\b\0\0\l\u\i\k\b\p\w\n\i\l\z\0\0\0\h\m\l\k\t\n\6\l\z\z\o\y\m\c\0\9\7\g\1\j\y\l\e\p\2\1\q\n\8\5\k\p\3\n\p\c\q\6\6\j\z\n\x\h\j\q\5\w\2\p\8\e\u\n\d\4\u\u\a\x\2\5\t\1\v\2\r\5\a\3\r\n\8\u\l\v\y\5\o\u\e\u\c\0\u\t\w\s\i\1\4\g\v\z\y\4\u\z\7\m\y\l\f\l\2\r\x\m\t\o\y\m\5\a\q\d\t\k\c\1\p\t\a\y\9\5\a\4\o\l\c\k\s\i\z\a\v\v\v\n\a\p\c\v\3\i\3\x\w\1\3\g\6\w\v\5\l\l\5\d\x\1\0\t\7\s\v\z\2\8\d\4\m\m\x\a\d\s\x\f\0\u\7\x\q\r\8\a\4\e\m\b\5\j\i\v\r\n\h\s\c\m\7\6\h\8\s\j\0\3\8\9\q\p\5\4\g\c\r\r\m\5\q\s\l\6\b\x\4\t\w\3\6\x\o\b\w\c\e\p\v\z\7\a\c\o\2\y\3\d\g\8\a\z\r\8\9\5\3\d\w\q\7\g\l\d\g\i\7\9\v\z\e\e\d\m\x\f\x\i\x\5\h\w\j\k\0\a\q\g\b\r\e\e\8\s\r\h\h\e\j\3\w\5\g\i\8\r\d\h\p\c\l\l\k\6\p\8\s\s\e\4\y\n\n\b\1\o\9\x\l\o\z\e\t\v\0\x\y\n\m\a\r\k\z\9\h\7\m\6\4\6\5\0\s\x\4\5\i\0\1\1\i\g\h\k\y\m\u\f\k\e\a\s\s\b\r\w\w\c\r\c\b\7\0\9\w\m\4\w\9\7\q\r\q\0\x\2\d\q\w\q\z\3\l\9\b\q\u\j\s\7\v\8\b\s\b\m\5\8\m\0\1\4\v\l\6\2\0\c\e\3\h\2\o\7\c\j\d\h\i\y\i\i\u\w\f\8\m\l\5\t\q\c\i\3\f\g\p\u\x\p\7\5\d\4\h\f\f\5\b\d\3\u\8\g\u\t\v\n\k\r\c\j\b\v\g\t\i\k\v\v\7\3\t\r\4\6\t\5\y\t\i\e\y\x\t\5\3\1\d\4\z\u\5\j\t\g\9\f\h\o\t\2\1\w\9\l\g\u\u\0\b\s\0\p\w\i\d\u\2\8\q\p\a\m\o\9\s\q\d\4\o\f\s\5\z\z\1\p\z\b\f\x\y\0\h\z\k\o\c\b\r\w\a\1\h\2\9\v\5\n\m\e\l\x\9\7\x\m\s\b\c\6\m\w\6\3\2\o\g\f\j\q\5\h\1\y\1\z\p\b\w\m\p\j\a\b\k\j\l\u\i\f\7\h\p\y\h\8\q\i\n\x\6\i\6\n\r\h\u\b\y\i\e\f\m\b\6\n\h\m\a\e\q\d\s\v\w\s\x\r\d\z\b\o\a\6\3\x\p\y\k\1\q\q\4\6\f\r\8\y\q\7\7\s\e\7\r\a\e\7\v\t\q\4\s\y\c\0\s\k\v\i\d\y\h\e\t\j\t\m\x\s\b\q\1\b\3\p\n\k\b\c\3\a\g\i\p\5 ]] 00:31:16.052 00:31:16.053 real 0m4.372s 00:31:16.053 user 0m3.679s 00:31:16.053 sys 0m0.556s 00:31:16.053 00:45:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:16.053 00:45:09 -- common/autotest_common.sh@10 -- # set +x 00:31:16.053 ************************************ 00:31:16.053 END TEST dd_rw_offset 00:31:16.053 ************************************ 00:31:16.053 00:45:09 -- dd/basic_rw.sh@1 -- # cleanup 00:31:16.053 00:45:09 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:31:16.053 00:45:09 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:31:16.053 00:45:09 -- dd/common.sh@11 -- # local nvme_ref= 00:31:16.053 00:45:09 -- dd/common.sh@12 -- # local size=0xffff 00:31:16.053 00:45:09 -- dd/common.sh@14 -- # local bs=1048576 00:31:16.053 00:45:09 -- dd/common.sh@15 -- # local count=1 00:31:16.053 00:45:09 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:31:16.053 00:45:09 -- dd/common.sh@18 -- # gen_conf 00:31:16.053 00:45:09 -- dd/common.sh@31 -- # xtrace_disable 00:31:16.053 00:45:09 -- common/autotest_common.sh@10 -- # set +x 00:31:16.053 { 00:31:16.053 "subsystems": [ 00:31:16.053 { 00:31:16.053 
"subsystem": "bdev", 00:31:16.053 "config": [ 00:31:16.053 { 00:31:16.053 "params": { 00:31:16.053 "trtype": "pcie", 00:31:16.053 "traddr": "0000:00:10.0", 00:31:16.053 "name": "Nvme0" 00:31:16.053 }, 00:31:16.053 "method": "bdev_nvme_attach_controller" 00:31:16.053 }, 00:31:16.053 { 00:31:16.053 "method": "bdev_wait_for_examine" 00:31:16.053 } 00:31:16.053 ] 00:31:16.053 } 00:31:16.053 ] 00:31:16.053 } 00:31:16.053 [2024-04-24 00:45:09.832141] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:31:16.053 [2024-04-24 00:45:09.832357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143636 ] 00:31:16.312 [2024-04-24 00:45:10.012567] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.569 [2024-04-24 00:45:10.251759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.510  Copying: 1024/1024 [kB] (average 500 MBps) 00:31:18.510 00:31:18.510 00:45:11 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:18.510 00:31:18.510 real 0m54.090s 00:31:18.510 user 0m46.030s 00:31:18.510 sys 0m6.453s 00:31:18.510 00:45:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:18.510 ************************************ 00:31:18.510 END TEST spdk_dd_basic_rw 00:31:18.510 00:45:11 -- common/autotest_common.sh@10 -- # set +x 00:31:18.510 ************************************ 00:31:18.510 00:45:11 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:31:18.510 00:45:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:18.510 00:45:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:18.510 00:45:11 -- common/autotest_common.sh@10 -- # set +x 00:31:18.510 ************************************ 00:31:18.510 START TEST spdk_dd_posix 00:31:18.510 ************************************ 00:31:18.510 00:45:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:31:18.510 * Looking for test storage... 
00:31:18.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:31:18.510 00:45:12 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:18.510 00:45:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.510 00:45:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.510 00:45:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.510 00:45:12 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:18.510 00:45:12 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:18.510 00:45:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:18.510 00:45:12 -- paths/export.sh@5 -- # export PATH 00:31:18.510 00:45:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:18.510 00:45:12 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:31:18.510 00:45:12 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:31:18.510 00:45:12 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:31:18.510 00:45:12 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:31:18.510 00:45:12 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:18.510 00:45:12 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:18.510 00:45:12 -- dd/posix.sh@130 -- # tests 00:31:18.510 00:45:12 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:31:18.510 * First test run, using AIO 00:31:18.510 00:45:12 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:31:18.510 00:45:12 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:18.510 00:45:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:18.510 00:45:12 -- common/autotest_common.sh@10 -- # set +x 00:31:18.510 ************************************ 00:31:18.510 START TEST dd_flag_append 00:31:18.510 ************************************ 00:31:18.510 00:45:12 -- common/autotest_common.sh@1111 -- # append 00:31:18.510 00:45:12 -- dd/posix.sh@16 -- # local dump0 00:31:18.510 00:45:12 -- dd/posix.sh@17 -- # local dump1 00:31:18.510 00:45:12 -- dd/posix.sh@19 -- # gen_bytes 32 00:31:18.510 00:45:12 -- dd/common.sh@98 -- # xtrace_disable 00:31:18.510 00:45:12 -- common/autotest_common.sh@10 -- # set +x 00:31:18.510 00:45:12 -- dd/posix.sh@19 -- # dump0=24g4bauawtiqeoj3v39qhjvtbbtkudir 00:31:18.510 00:45:12 -- dd/posix.sh@20 -- # gen_bytes 32 00:31:18.510 00:45:12 -- dd/common.sh@98 -- # xtrace_disable 00:31:18.510 00:45:12 -- common/autotest_common.sh@10 -- # set +x 00:31:18.510 00:45:12 -- dd/posix.sh@20 -- # dump1=3lw20mha1sqe8j93d8lo7tjzd9zvs2zt 00:31:18.510 00:45:12 -- dd/posix.sh@22 -- # printf %s 24g4bauawtiqeoj3v39qhjvtbbtkudir 00:31:18.510 00:45:12 -- dd/posix.sh@23 -- # printf %s 3lw20mha1sqe8j93d8lo7tjzd9zvs2zt 00:31:18.510 00:45:12 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:31:18.510 [2024-04-24 00:45:12.286856] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:31:18.510 [2024-04-24 00:45:12.287077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143734 ] 00:31:18.768 [2024-04-24 00:45:12.469683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.025 [2024-04-24 00:45:12.764341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.979  Copying: 32/32 [B] (average 31 kBps) 00:31:20.979 00:31:20.979 ************************************ 00:31:20.979 END TEST dd_flag_append 00:31:20.979 ************************************ 00:31:20.979 00:45:14 -- dd/posix.sh@27 -- # [[ 3lw20mha1sqe8j93d8lo7tjzd9zvs2zt24g4bauawtiqeoj3v39qhjvtbbtkudir == \3\l\w\2\0\m\h\a\1\s\q\e\8\j\9\3\d\8\l\o\7\t\j\z\d\9\z\v\s\2\z\t\2\4\g\4\b\a\u\a\w\t\i\q\e\o\j\3\v\3\9\q\h\j\v\t\b\b\t\k\u\d\i\r ]] 00:31:20.979 00:31:20.979 real 0m2.279s 00:31:20.979 user 0m1.903s 00:31:20.979 sys 0m0.244s 00:31:20.979 00:45:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:20.979 00:45:14 -- common/autotest_common.sh@10 -- # set +x 00:31:20.979 00:45:14 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:31:20.979 00:45:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:20.979 00:45:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:20.979 00:45:14 -- common/autotest_common.sh@10 -- # set +x 00:31:20.979 ************************************ 00:31:20.979 START TEST dd_flag_directory 00:31:20.979 ************************************ 00:31:20.979 00:45:14 -- common/autotest_common.sh@1111 -- # directory 00:31:20.979 00:45:14 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:20.979 00:45:14 -- common/autotest_common.sh@638 -- # local es=0 00:31:20.979 
00:45:14 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:20.979 00:45:14 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:20.979 00:45:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:20.979 00:45:14 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:20.979 00:45:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:20.979 00:45:14 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:20.979 00:45:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:20.979 00:45:14 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:20.979 00:45:14 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:20.979 00:45:14 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:20.979 [2024-04-24 00:45:14.664410] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:31:20.979 [2024-04-24 00:45:14.664599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143789 ] 00:31:21.246 [2024-04-24 00:45:14.851437] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.505 [2024-04-24 00:45:15.137651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.762 [2024-04-24 00:45:15.497115] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:21.763 [2024-04-24 00:45:15.497200] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:21.763 [2024-04-24 00:45:15.497226] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:22.698 [2024-04-24 00:45:16.339029] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:23.264 00:45:16 -- common/autotest_common.sh@641 -- # es=236 00:31:23.264 00:45:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:23.264 00:45:16 -- common/autotest_common.sh@650 -- # es=108 00:31:23.264 00:45:16 -- common/autotest_common.sh@651 -- # case "$es" in 00:31:23.264 00:45:16 -- common/autotest_common.sh@658 -- # es=1 00:31:23.264 00:45:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:23.264 00:45:16 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:31:23.264 00:45:16 -- common/autotest_common.sh@638 -- # local es=0 00:31:23.264 00:45:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:31:23.264 00:45:16 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:23.264 00:45:16 -- common/autotest_common.sh@630 -- 
# case "$(type -t "$arg")" in 00:31:23.264 00:45:16 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:23.264 00:45:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:23.264 00:45:16 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:23.264 00:45:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:23.264 00:45:16 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:23.264 00:45:16 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:23.264 00:45:16 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:31:23.264 [2024-04-24 00:45:16.866345] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:31:23.265 [2024-04-24 00:45:16.866538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143824 ] 00:31:23.265 [2024-04-24 00:45:17.046094] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.523 [2024-04-24 00:45:17.265328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.089 [2024-04-24 00:45:17.619314] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:24.089 [2024-04-24 00:45:17.619392] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:24.089 [2024-04-24 00:45:17.619418] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:25.042 [2024-04-24 00:45:18.473856] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:25.300 00:45:18 -- common/autotest_common.sh@641 -- # es=236 00:31:25.300 00:45:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:25.300 00:45:18 -- common/autotest_common.sh@650 -- # es=108 00:31:25.300 00:45:18 -- common/autotest_common.sh@651 -- # case "$es" in 00:31:25.300 00:45:18 -- common/autotest_common.sh@658 -- # es=1 00:31:25.300 00:45:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:25.300 00:31:25.300 real 0m4.361s 00:31:25.300 user 0m3.668s 00:31:25.300 sys 0m0.492s 00:31:25.300 00:45:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:25.300 00:45:18 -- common/autotest_common.sh@10 -- # set +x 00:31:25.300 ************************************ 00:31:25.300 END TEST dd_flag_directory 00:31:25.300 ************************************ 00:31:25.300 00:45:18 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:31:25.300 00:45:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:25.300 00:45:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:25.300 00:45:18 -- common/autotest_common.sh@10 -- # set +x 00:31:25.300 ************************************ 00:31:25.300 START TEST dd_flag_nofollow 00:31:25.300 ************************************ 00:31:25.300 00:45:19 -- common/autotest_common.sh@1111 -- # nofollow 00:31:25.300 00:45:19 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:31:25.300 00:45:19 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:31:25.300 00:45:19 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:31:25.300 00:45:19 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:31:25.300 00:45:19 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:25.300 00:45:19 -- common/autotest_common.sh@638 -- # local es=0 00:31:25.300 00:45:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:25.300 00:45:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:25.300 00:45:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:25.300 00:45:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:25.300 00:45:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:25.300 00:45:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:25.300 00:45:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:25.300 00:45:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:25.300 00:45:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:25.300 00:45:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:25.558 [2024-04-24 00:45:19.130643] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:31:25.558 [2024-04-24 00:45:19.130882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143879 ] 00:31:25.558 [2024-04-24 00:45:19.302905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.816 [2024-04-24 00:45:19.594974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.384 [2024-04-24 00:45:19.936313] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:31:26.384 [2024-04-24 00:45:19.936399] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:31:26.384 [2024-04-24 00:45:19.936426] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:27.323 [2024-04-24 00:45:20.817575] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:27.581 00:45:21 -- common/autotest_common.sh@641 -- # es=216 00:31:27.581 00:45:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:27.581 00:45:21 -- common/autotest_common.sh@650 -- # es=88 00:31:27.581 00:45:21 -- common/autotest_common.sh@651 -- # case "$es" in 00:31:27.581 00:45:21 -- common/autotest_common.sh@658 -- # es=1 00:31:27.581 00:45:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:27.581 00:45:21 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:27.581 00:45:21 -- common/autotest_common.sh@638 -- # local es=0 00:31:27.581 00:45:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:27.581 00:45:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:27.581 00:45:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:27.581 00:45:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:27.581 00:45:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:27.581 00:45:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:27.581 00:45:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:27.581 00:45:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:27.581 00:45:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:27.581 00:45:21 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:31:27.839 [2024-04-24 00:45:21.379898] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:31:27.839 [2024-04-24 00:45:21.380094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143906 ] 00:31:27.839 [2024-04-24 00:45:21.558313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.098 [2024-04-24 00:45:21.793398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.700 [2024-04-24 00:45:22.150682] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:31:28.700 [2024-04-24 00:45:22.150782] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:31:28.700 [2024-04-24 00:45:22.150808] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:29.282 [2024-04-24 00:45:23.007172] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:31:29.849 00:45:23 -- common/autotest_common.sh@641 -- # es=216 00:31:29.849 00:45:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:29.849 00:45:23 -- common/autotest_common.sh@650 -- # es=88 00:31:29.849 00:45:23 -- common/autotest_common.sh@651 -- # case "$es" in 00:31:29.849 00:45:23 -- common/autotest_common.sh@658 -- # es=1 00:31:29.849 00:45:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:29.849 00:45:23 -- dd/posix.sh@46 -- # gen_bytes 512 00:31:29.849 00:45:23 -- dd/common.sh@98 -- # xtrace_disable 00:31:29.849 00:45:23 -- common/autotest_common.sh@10 -- # set +x 00:31:29.849 00:45:23 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:29.849 [2024-04-24 00:45:23.558056] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:31:29.849 [2024-04-24 00:45:23.558255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143940 ] 00:31:30.106 [2024-04-24 00:45:23.738415] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.363 [2024-04-24 00:45:23.959946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.008  Copying: 512/512 [B] (average 500 kBps) 00:31:32.008 00:31:32.008 00:45:25 -- dd/posix.sh@49 -- # [[ 38xyo4lrmi83mgr1pqnhzjcjach6nbhutqpoh0mtpz4xlzfj1c1d4sqptj5qh1x0zzhsu2lzicwy6biytwlhn1qxsamn9yoetgin93lcavppuq5ze5hj98lhbil41m166n67rmjyz0otox8ksjkmpglknsadqzjsyijvbcont6z8boi2voqngv2198lt9g57jr57qk34u2n4euobuqndqwhossfyla5o0oz2gy2zsg9s1ext4urwexrtkjfm7orexmxyoc7ex1wiud0w69jq8nqsd5xd9jsgw30z19jyuvme6vg2hrpmkzjhbj0t0eltpf2ewdjq4iroozsvgrk3661xof2mplojbc2zmo7le4f3ffbf6fxcvup5yhggnbqfhbb60yrekhmn3xj4g3o0glr9fdp0hmm3wnlijoci2adbi4bcl9iz6u6yr9pyipi1rjjkohp7yj0gopw5gv216bih9j75djn2off3zb4mk7rioh9w72ns8ixwjl1nm13k == \3\8\x\y\o\4\l\r\m\i\8\3\m\g\r\1\p\q\n\h\z\j\c\j\a\c\h\6\n\b\h\u\t\q\p\o\h\0\m\t\p\z\4\x\l\z\f\j\1\c\1\d\4\s\q\p\t\j\5\q\h\1\x\0\z\z\h\s\u\2\l\z\i\c\w\y\6\b\i\y\t\w\l\h\n\1\q\x\s\a\m\n\9\y\o\e\t\g\i\n\9\3\l\c\a\v\p\p\u\q\5\z\e\5\h\j\9\8\l\h\b\i\l\4\1\m\1\6\6\n\6\7\r\m\j\y\z\0\o\t\o\x\8\k\s\j\k\m\p\g\l\k\n\s\a\d\q\z\j\s\y\i\j\v\b\c\o\n\t\6\z\8\b\o\i\2\v\o\q\n\g\v\2\1\9\8\l\t\9\g\5\7\j\r\5\7\q\k\3\4\u\2\n\4\e\u\o\b\u\q\n\d\q\w\h\o\s\s\f\y\l\a\5\o\0\o\z\2\g\y\2\z\s\g\9\s\1\e\x\t\4\u\r\w\e\x\r\t\k\j\f\m\7\o\r\e\x\m\x\y\o\c\7\e\x\1\w\i\u\d\0\w\6\9\j\q\8\n\q\s\d\5\x\d\9\j\s\g\w\3\0\z\1\9\j\y\u\v\m\e\6\v\g\2\h\r\p\m\k\z\j\h\b\j\0\t\0\e\l\t\p\f\2\e\w\d\j\q\4\i\r\o\o\z\s\v\g\r\k\3\6\6\1\x\o\f\2\m\p\l\o\j\b\c\2\z\m\o\7\l\e\4\f\3\f\f\b\f\6\f\x\c\v\u\p\5\y\h\g\g\n\b\q\f\h\b\b\6\0\y\r\e\k\h\m\n\3\x\j\4\g\3\o\0\g\l\r\9\f\d\p\0\h\m\m\3\w\n\l\i\j\o\c\i\2\a\d\b\i\4\b\c\l\9\i\z\6\u\6\y\r\9\p\y\i\p\i\1\r\j\j\k\o\h\p\7\y\j\0\g\o\p\w\5\g\v\2\1\6\b\i\h\9\j\7\5\d\j\n\2\o\f\f\3\z\b\4\m\k\7\r\i\o\h\9\w\7\2\n\s\8\i\x\w\j\l\1\n\m\1\3\k ]] 00:31:32.008 ************************************ 00:31:32.008 END TEST dd_flag_nofollow 00:31:32.008 ************************************ 00:31:32.008 00:31:32.008 real 0m6.644s 00:31:32.008 user 0m5.632s 00:31:32.008 sys 0m0.687s 00:31:32.008 00:45:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:32.008 00:45:25 -- common/autotest_common.sh@10 -- # set +x 00:31:32.008 00:45:25 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:31:32.008 00:45:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:32.008 00:45:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:32.008 00:45:25 -- common/autotest_common.sh@10 -- # set +x 00:31:32.008 ************************************ 00:31:32.008 START TEST dd_flag_noatime 00:31:32.008 ************************************ 00:31:32.008 00:45:25 -- common/autotest_common.sh@1111 -- # noatime 00:31:32.008 00:45:25 -- dd/posix.sh@53 -- # local atime_if 00:31:32.008 00:45:25 -- dd/posix.sh@54 -- # local atime_of 00:31:32.008 00:45:25 -- dd/posix.sh@58 -- # gen_bytes 512 00:31:32.008 00:45:25 -- dd/common.sh@98 -- # xtrace_disable 00:31:32.008 00:45:25 -- common/autotest_common.sh@10 -- # set +x 00:31:32.008 00:45:25 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:32.008 00:45:25 -- dd/posix.sh@60 -- # atime_if=1713919524 00:31:32.008 00:45:25 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:32.008 00:45:25 -- dd/posix.sh@61 -- # atime_of=1713919525 00:31:32.008 00:45:25 -- dd/posix.sh@66 -- # sleep 1 00:31:33.396 00:45:26 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:33.396 [2024-04-24 00:45:26.879165] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:31:33.396 [2024-04-24 00:45:26.879356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144015 ] 00:31:33.396 [2024-04-24 00:45:27.061506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.654 [2024-04-24 00:45:27.354030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.628  Copying: 512/512 [B] (average 500 kBps) 00:31:35.628 00:31:35.628 00:45:29 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:35.628 00:45:29 -- dd/posix.sh@69 -- # (( atime_if == 1713919524 )) 00:31:35.628 00:45:29 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:35.628 00:45:29 -- dd/posix.sh@70 -- # (( atime_of == 1713919525 )) 00:31:35.628 00:45:29 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:31:35.628 [2024-04-24 00:45:29.196378] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:31:35.628 [2024-04-24 00:45:29.196549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144043 ] 00:31:35.628 [2024-04-24 00:45:29.363932] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.886 [2024-04-24 00:45:29.679564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.824  Copying: 512/512 [B] (average 500 kBps) 00:31:37.824 00:31:37.824 00:45:31 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:37.824 00:45:31 -- dd/posix.sh@73 -- # (( atime_if < 1713919530 )) 00:31:37.824 00:31:37.824 real 0m5.703s 00:31:37.824 user 0m4.001s 00:31:37.824 sys 0m0.443s 00:31:37.824 00:45:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:37.824 00:45:31 -- common/autotest_common.sh@10 -- # set +x 00:31:37.824 ************************************ 00:31:37.824 END TEST dd_flag_noatime 00:31:37.824 ************************************ 00:31:37.824 00:45:31 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:31:37.824 00:45:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:37.824 00:45:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:37.824 00:45:31 -- common/autotest_common.sh@10 -- # set +x 00:31:37.824 ************************************ 00:31:37.824 START TEST dd_flags_misc 00:31:37.824 ************************************ 00:31:37.824 00:45:31 -- common/autotest_common.sh@1111 -- # io 00:31:37.824 00:45:31 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:31:37.824 00:45:31 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:31:37.824 
00:45:31 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:31:37.824 00:45:31 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:31:37.824 00:45:31 -- dd/posix.sh@86 -- # gen_bytes 512 00:31:37.824 00:45:31 -- dd/common.sh@98 -- # xtrace_disable 00:31:37.824 00:45:31 -- common/autotest_common.sh@10 -- # set +x 00:31:37.824 00:45:31 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:37.824 00:45:31 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:31:38.088 [2024-04-24 00:45:31.644633] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:31:38.088 [2024-04-24 00:45:31.644836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144095 ] 00:31:38.088 [2024-04-24 00:45:31.810571] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.345 [2024-04-24 00:45:32.057941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.284  Copying: 512/512 [B] (average 500 kBps) 00:31:40.284 00:31:40.284 00:45:33 -- dd/posix.sh@93 -- # [[ gxfpiexrkp5wzydvk90pwcz3m95fqyc7p8z815wd5udrmkugkn5vni746n3xozrftvdxaoyqv5v0aftybtsm4fr4fopv7k4dm09sdi0tmh48a9tfmcawwkcl081sbyde364a5eov524ewwbhmzs2id66yf20enfeun7tivz2yv19u19dw7stsbg00jkbta4c8qh19ikmov7bdz5xqp6fh0b4akpw0fcsozp3hogpdenqz2rq9kzbeapzi3u9dst3o8rqwqeqzu5gy4fe0yt8lmfxvigxmm2fvji7xmuoiivs6bc1735wy7zxyikzcnvdeo0bshsj2jxeqa2r6zyup645d35m5kzvafxu4sh2ag78p1aky0qm2a1z1v4siiua1b1o5f3ypcaj1uxnt2ldsf6duhi9xn0n35jher6mep9qjmo3yhb2roubis40a722avuxsl9mn6aiyr64nxoonvs234ffws4fe4co3vcr25d63nuu4ye3h2kz54lgxgra == \g\x\f\p\i\e\x\r\k\p\5\w\z\y\d\v\k\9\0\p\w\c\z\3\m\9\5\f\q\y\c\7\p\8\z\8\1\5\w\d\5\u\d\r\m\k\u\g\k\n\5\v\n\i\7\4\6\n\3\x\o\z\r\f\t\v\d\x\a\o\y\q\v\5\v\0\a\f\t\y\b\t\s\m\4\f\r\4\f\o\p\v\7\k\4\d\m\0\9\s\d\i\0\t\m\h\4\8\a\9\t\f\m\c\a\w\w\k\c\l\0\8\1\s\b\y\d\e\3\6\4\a\5\e\o\v\5\2\4\e\w\w\b\h\m\z\s\2\i\d\6\6\y\f\2\0\e\n\f\e\u\n\7\t\i\v\z\2\y\v\1\9\u\1\9\d\w\7\s\t\s\b\g\0\0\j\k\b\t\a\4\c\8\q\h\1\9\i\k\m\o\v\7\b\d\z\5\x\q\p\6\f\h\0\b\4\a\k\p\w\0\f\c\s\o\z\p\3\h\o\g\p\d\e\n\q\z\2\r\q\9\k\z\b\e\a\p\z\i\3\u\9\d\s\t\3\o\8\r\q\w\q\e\q\z\u\5\g\y\4\f\e\0\y\t\8\l\m\f\x\v\i\g\x\m\m\2\f\v\j\i\7\x\m\u\o\i\i\v\s\6\b\c\1\7\3\5\w\y\7\z\x\y\i\k\z\c\n\v\d\e\o\0\b\s\h\s\j\2\j\x\e\q\a\2\r\6\z\y\u\p\6\4\5\d\3\5\m\5\k\z\v\a\f\x\u\4\s\h\2\a\g\7\8\p\1\a\k\y\0\q\m\2\a\1\z\1\v\4\s\i\i\u\a\1\b\1\o\5\f\3\y\p\c\a\j\1\u\x\n\t\2\l\d\s\f\6\d\u\h\i\9\x\n\0\n\3\5\j\h\e\r\6\m\e\p\9\q\j\m\o\3\y\h\b\2\r\o\u\b\i\s\4\0\a\7\2\2\a\v\u\x\s\l\9\m\n\6\a\i\y\r\6\4\n\x\o\o\n\v\s\2\3\4\f\f\w\s\4\f\e\4\c\o\3\v\c\r\2\5\d\6\3\n\u\u\4\y\e\3\h\2\k\z\5\4\l\g\x\g\r\a ]] 00:31:40.284 00:45:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:40.284 00:45:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:31:40.284 [2024-04-24 00:45:33.942199] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:31:40.284 [2024-04-24 00:45:33.942432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144135 ] 00:31:40.541 [2024-04-24 00:45:34.121710] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.798 [2024-04-24 00:45:34.353251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.433  Copying: 512/512 [B] (average 500 kBps) 00:31:42.433 00:31:42.433 00:45:36 -- dd/posix.sh@93 -- # [[ gxfpiexrkp5wzydvk90pwcz3m95fqyc7p8z815wd5udrmkugkn5vni746n3xozrftvdxaoyqv5v0aftybtsm4fr4fopv7k4dm09sdi0tmh48a9tfmcawwkcl081sbyde364a5eov524ewwbhmzs2id66yf20enfeun7tivz2yv19u19dw7stsbg00jkbta4c8qh19ikmov7bdz5xqp6fh0b4akpw0fcsozp3hogpdenqz2rq9kzbeapzi3u9dst3o8rqwqeqzu5gy4fe0yt8lmfxvigxmm2fvji7xmuoiivs6bc1735wy7zxyikzcnvdeo0bshsj2jxeqa2r6zyup645d35m5kzvafxu4sh2ag78p1aky0qm2a1z1v4siiua1b1o5f3ypcaj1uxnt2ldsf6duhi9xn0n35jher6mep9qjmo3yhb2roubis40a722avuxsl9mn6aiyr64nxoonvs234ffws4fe4co3vcr25d63nuu4ye3h2kz54lgxgra == \g\x\f\p\i\e\x\r\k\p\5\w\z\y\d\v\k\9\0\p\w\c\z\3\m\9\5\f\q\y\c\7\p\8\z\8\1\5\w\d\5\u\d\r\m\k\u\g\k\n\5\v\n\i\7\4\6\n\3\x\o\z\r\f\t\v\d\x\a\o\y\q\v\5\v\0\a\f\t\y\b\t\s\m\4\f\r\4\f\o\p\v\7\k\4\d\m\0\9\s\d\i\0\t\m\h\4\8\a\9\t\f\m\c\a\w\w\k\c\l\0\8\1\s\b\y\d\e\3\6\4\a\5\e\o\v\5\2\4\e\w\w\b\h\m\z\s\2\i\d\6\6\y\f\2\0\e\n\f\e\u\n\7\t\i\v\z\2\y\v\1\9\u\1\9\d\w\7\s\t\s\b\g\0\0\j\k\b\t\a\4\c\8\q\h\1\9\i\k\m\o\v\7\b\d\z\5\x\q\p\6\f\h\0\b\4\a\k\p\w\0\f\c\s\o\z\p\3\h\o\g\p\d\e\n\q\z\2\r\q\9\k\z\b\e\a\p\z\i\3\u\9\d\s\t\3\o\8\r\q\w\q\e\q\z\u\5\g\y\4\f\e\0\y\t\8\l\m\f\x\v\i\g\x\m\m\2\f\v\j\i\7\x\m\u\o\i\i\v\s\6\b\c\1\7\3\5\w\y\7\z\x\y\i\k\z\c\n\v\d\e\o\0\b\s\h\s\j\2\j\x\e\q\a\2\r\6\z\y\u\p\6\4\5\d\3\5\m\5\k\z\v\a\f\x\u\4\s\h\2\a\g\7\8\p\1\a\k\y\0\q\m\2\a\1\z\1\v\4\s\i\i\u\a\1\b\1\o\5\f\3\y\p\c\a\j\1\u\x\n\t\2\l\d\s\f\6\d\u\h\i\9\x\n\0\n\3\5\j\h\e\r\6\m\e\p\9\q\j\m\o\3\y\h\b\2\r\o\u\b\i\s\4\0\a\7\2\2\a\v\u\x\s\l\9\m\n\6\a\i\y\r\6\4\n\x\o\o\n\v\s\2\3\4\f\f\w\s\4\f\e\4\c\o\3\v\c\r\2\5\d\6\3\n\u\u\4\y\e\3\h\2\k\z\5\4\l\g\x\g\r\a ]] 00:31:42.433 00:45:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:42.433 00:45:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:31:42.433 [2024-04-24 00:45:36.226065] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:31:42.433 [2024-04-24 00:45:36.226392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144165 ] 00:31:42.691 [2024-04-24 00:45:36.403640] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.949 [2024-04-24 00:45:36.666126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.940  Copying: 512/512 [B] (average 250 kBps) 00:31:44.940 00:31:44.940 00:45:38 -- dd/posix.sh@93 -- # [[ gxfpiexrkp5wzydvk90pwcz3m95fqyc7p8z815wd5udrmkugkn5vni746n3xozrftvdxaoyqv5v0aftybtsm4fr4fopv7k4dm09sdi0tmh48a9tfmcawwkcl081sbyde364a5eov524ewwbhmzs2id66yf20enfeun7tivz2yv19u19dw7stsbg00jkbta4c8qh19ikmov7bdz5xqp6fh0b4akpw0fcsozp3hogpdenqz2rq9kzbeapzi3u9dst3o8rqwqeqzu5gy4fe0yt8lmfxvigxmm2fvji7xmuoiivs6bc1735wy7zxyikzcnvdeo0bshsj2jxeqa2r6zyup645d35m5kzvafxu4sh2ag78p1aky0qm2a1z1v4siiua1b1o5f3ypcaj1uxnt2ldsf6duhi9xn0n35jher6mep9qjmo3yhb2roubis40a722avuxsl9mn6aiyr64nxoonvs234ffws4fe4co3vcr25d63nuu4ye3h2kz54lgxgra == \g\x\f\p\i\e\x\r\k\p\5\w\z\y\d\v\k\9\0\p\w\c\z\3\m\9\5\f\q\y\c\7\p\8\z\8\1\5\w\d\5\u\d\r\m\k\u\g\k\n\5\v\n\i\7\4\6\n\3\x\o\z\r\f\t\v\d\x\a\o\y\q\v\5\v\0\a\f\t\y\b\t\s\m\4\f\r\4\f\o\p\v\7\k\4\d\m\0\9\s\d\i\0\t\m\h\4\8\a\9\t\f\m\c\a\w\w\k\c\l\0\8\1\s\b\y\d\e\3\6\4\a\5\e\o\v\5\2\4\e\w\w\b\h\m\z\s\2\i\d\6\6\y\f\2\0\e\n\f\e\u\n\7\t\i\v\z\2\y\v\1\9\u\1\9\d\w\7\s\t\s\b\g\0\0\j\k\b\t\a\4\c\8\q\h\1\9\i\k\m\o\v\7\b\d\z\5\x\q\p\6\f\h\0\b\4\a\k\p\w\0\f\c\s\o\z\p\3\h\o\g\p\d\e\n\q\z\2\r\q\9\k\z\b\e\a\p\z\i\3\u\9\d\s\t\3\o\8\r\q\w\q\e\q\z\u\5\g\y\4\f\e\0\y\t\8\l\m\f\x\v\i\g\x\m\m\2\f\v\j\i\7\x\m\u\o\i\i\v\s\6\b\c\1\7\3\5\w\y\7\z\x\y\i\k\z\c\n\v\d\e\o\0\b\s\h\s\j\2\j\x\e\q\a\2\r\6\z\y\u\p\6\4\5\d\3\5\m\5\k\z\v\a\f\x\u\4\s\h\2\a\g\7\8\p\1\a\k\y\0\q\m\2\a\1\z\1\v\4\s\i\i\u\a\1\b\1\o\5\f\3\y\p\c\a\j\1\u\x\n\t\2\l\d\s\f\6\d\u\h\i\9\x\n\0\n\3\5\j\h\e\r\6\m\e\p\9\q\j\m\o\3\y\h\b\2\r\o\u\b\i\s\4\0\a\7\2\2\a\v\u\x\s\l\9\m\n\6\a\i\y\r\6\4\n\x\o\o\n\v\s\2\3\4\f\f\w\s\4\f\e\4\c\o\3\v\c\r\2\5\d\6\3\n\u\u\4\y\e\3\h\2\k\z\5\4\l\g\x\g\r\a ]] 00:31:44.940 00:45:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:44.940 00:45:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:31:44.940 [2024-04-24 00:45:38.508729] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:31:44.940 [2024-04-24 00:45:38.509204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144190 ] 00:31:44.940 [2024-04-24 00:45:38.687378] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.198 [2024-04-24 00:45:38.918398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.172  Copying: 512/512 [B] (average 250 kBps) 00:31:47.172 00:31:47.172 00:45:40 -- dd/posix.sh@93 -- # [[ gxfpiexrkp5wzydvk90pwcz3m95fqyc7p8z815wd5udrmkugkn5vni746n3xozrftvdxaoyqv5v0aftybtsm4fr4fopv7k4dm09sdi0tmh48a9tfmcawwkcl081sbyde364a5eov524ewwbhmzs2id66yf20enfeun7tivz2yv19u19dw7stsbg00jkbta4c8qh19ikmov7bdz5xqp6fh0b4akpw0fcsozp3hogpdenqz2rq9kzbeapzi3u9dst3o8rqwqeqzu5gy4fe0yt8lmfxvigxmm2fvji7xmuoiivs6bc1735wy7zxyikzcnvdeo0bshsj2jxeqa2r6zyup645d35m5kzvafxu4sh2ag78p1aky0qm2a1z1v4siiua1b1o5f3ypcaj1uxnt2ldsf6duhi9xn0n35jher6mep9qjmo3yhb2roubis40a722avuxsl9mn6aiyr64nxoonvs234ffws4fe4co3vcr25d63nuu4ye3h2kz54lgxgra == \g\x\f\p\i\e\x\r\k\p\5\w\z\y\d\v\k\9\0\p\w\c\z\3\m\9\5\f\q\y\c\7\p\8\z\8\1\5\w\d\5\u\d\r\m\k\u\g\k\n\5\v\n\i\7\4\6\n\3\x\o\z\r\f\t\v\d\x\a\o\y\q\v\5\v\0\a\f\t\y\b\t\s\m\4\f\r\4\f\o\p\v\7\k\4\d\m\0\9\s\d\i\0\t\m\h\4\8\a\9\t\f\m\c\a\w\w\k\c\l\0\8\1\s\b\y\d\e\3\6\4\a\5\e\o\v\5\2\4\e\w\w\b\h\m\z\s\2\i\d\6\6\y\f\2\0\e\n\f\e\u\n\7\t\i\v\z\2\y\v\1\9\u\1\9\d\w\7\s\t\s\b\g\0\0\j\k\b\t\a\4\c\8\q\h\1\9\i\k\m\o\v\7\b\d\z\5\x\q\p\6\f\h\0\b\4\a\k\p\w\0\f\c\s\o\z\p\3\h\o\g\p\d\e\n\q\z\2\r\q\9\k\z\b\e\a\p\z\i\3\u\9\d\s\t\3\o\8\r\q\w\q\e\q\z\u\5\g\y\4\f\e\0\y\t\8\l\m\f\x\v\i\g\x\m\m\2\f\v\j\i\7\x\m\u\o\i\i\v\s\6\b\c\1\7\3\5\w\y\7\z\x\y\i\k\z\c\n\v\d\e\o\0\b\s\h\s\j\2\j\x\e\q\a\2\r\6\z\y\u\p\6\4\5\d\3\5\m\5\k\z\v\a\f\x\u\4\s\h\2\a\g\7\8\p\1\a\k\y\0\q\m\2\a\1\z\1\v\4\s\i\i\u\a\1\b\1\o\5\f\3\y\p\c\a\j\1\u\x\n\t\2\l\d\s\f\6\d\u\h\i\9\x\n\0\n\3\5\j\h\e\r\6\m\e\p\9\q\j\m\o\3\y\h\b\2\r\o\u\b\i\s\4\0\a\7\2\2\a\v\u\x\s\l\9\m\n\6\a\i\y\r\6\4\n\x\o\o\n\v\s\2\3\4\f\f\w\s\4\f\e\4\c\o\3\v\c\r\2\5\d\6\3\n\u\u\4\y\e\3\h\2\k\z\5\4\l\g\x\g\r\a ]] 00:31:47.172 00:45:40 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:31:47.172 00:45:40 -- dd/posix.sh@86 -- # gen_bytes 512 00:31:47.172 00:45:40 -- dd/common.sh@98 -- # xtrace_disable 00:31:47.172 00:45:40 -- common/autotest_common.sh@10 -- # set +x 00:31:47.172 00:45:40 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:47.172 00:45:40 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:31:47.172 [2024-04-24 00:45:40.743652] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:31:47.172 [2024-04-24 00:45:40.744167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144219 ] 00:31:47.172 [2024-04-24 00:45:40.934520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.739 [2024-04-24 00:45:41.229542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.372  Copying: 512/512 [B] (average 500 kBps) 00:31:49.372 00:31:49.372 00:45:43 -- dd/posix.sh@93 -- # [[ nlib2r87sfyyeuabqrotayxelahcos22sh1n0cwpbr89gvuz6x8r5zw087n1dtbaxwjhucvne69ag984eny0v2pt3utqlagm7v17uvoj2odl8drodzazzy6v5xpvhnyuo5sac8tz8a5g667ygyoyeqfuojglin23q1pniv4imimb7ku1yi937yb6f5a9tbhh89cq6flf8s3iw8mo004b513dmp3thixsum7y4d9f6fbpla6yfeze51bwwq1jj3bembhsu4svqeesioxxs7nh7u47jpqs9yrnu4tw7uzpwn8meowel5yxxmhatjunyxq6ujlhjjpr4qb5xnnj4q7qbo639wg21il075g0oa5zl68t9ji5wbfcogwdo83ls6lej4lcox0wfmx29f2qltny75bxx2ysbw2pjzbyma9f9l6gv8h7s9zims33jmkgwq46731dp4gu3trkqukvjo35evddjj7fspx7onaduvcaxnexo488qzdad6m19ilb5n51 == \n\l\i\b\2\r\8\7\s\f\y\y\e\u\a\b\q\r\o\t\a\y\x\e\l\a\h\c\o\s\2\2\s\h\1\n\0\c\w\p\b\r\8\9\g\v\u\z\6\x\8\r\5\z\w\0\8\7\n\1\d\t\b\a\x\w\j\h\u\c\v\n\e\6\9\a\g\9\8\4\e\n\y\0\v\2\p\t\3\u\t\q\l\a\g\m\7\v\1\7\u\v\o\j\2\o\d\l\8\d\r\o\d\z\a\z\z\y\6\v\5\x\p\v\h\n\y\u\o\5\s\a\c\8\t\z\8\a\5\g\6\6\7\y\g\y\o\y\e\q\f\u\o\j\g\l\i\n\2\3\q\1\p\n\i\v\4\i\m\i\m\b\7\k\u\1\y\i\9\3\7\y\b\6\f\5\a\9\t\b\h\h\8\9\c\q\6\f\l\f\8\s\3\i\w\8\m\o\0\0\4\b\5\1\3\d\m\p\3\t\h\i\x\s\u\m\7\y\4\d\9\f\6\f\b\p\l\a\6\y\f\e\z\e\5\1\b\w\w\q\1\j\j\3\b\e\m\b\h\s\u\4\s\v\q\e\e\s\i\o\x\x\s\7\n\h\7\u\4\7\j\p\q\s\9\y\r\n\u\4\t\w\7\u\z\p\w\n\8\m\e\o\w\e\l\5\y\x\x\m\h\a\t\j\u\n\y\x\q\6\u\j\l\h\j\j\p\r\4\q\b\5\x\n\n\j\4\q\7\q\b\o\6\3\9\w\g\2\1\i\l\0\7\5\g\0\o\a\5\z\l\6\8\t\9\j\i\5\w\b\f\c\o\g\w\d\o\8\3\l\s\6\l\e\j\4\l\c\o\x\0\w\f\m\x\2\9\f\2\q\l\t\n\y\7\5\b\x\x\2\y\s\b\w\2\p\j\z\b\y\m\a\9\f\9\l\6\g\v\8\h\7\s\9\z\i\m\s\3\3\j\m\k\g\w\q\4\6\7\3\1\d\p\4\g\u\3\t\r\k\q\u\k\v\j\o\3\5\e\v\d\d\j\j\7\f\s\p\x\7\o\n\a\d\u\v\c\a\x\n\e\x\o\4\8\8\q\z\d\a\d\6\m\1\9\i\l\b\5\n\5\1 ]] 00:31:49.372 00:45:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:49.372 00:45:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:31:49.630 [2024-04-24 00:45:43.233506] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:31:49.630 [2024-04-24 00:45:43.234101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144255 ] 00:31:49.630 [2024-04-24 00:45:43.420248] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.887 [2024-04-24 00:45:43.657533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.826  Copying: 512/512 [B] (average 500 kBps) 00:31:51.826 00:31:51.826 00:45:45 -- dd/posix.sh@93 -- # [[ nlib2r87sfyyeuabqrotayxelahcos22sh1n0cwpbr89gvuz6x8r5zw087n1dtbaxwjhucvne69ag984eny0v2pt3utqlagm7v17uvoj2odl8drodzazzy6v5xpvhnyuo5sac8tz8a5g667ygyoyeqfuojglin23q1pniv4imimb7ku1yi937yb6f5a9tbhh89cq6flf8s3iw8mo004b513dmp3thixsum7y4d9f6fbpla6yfeze51bwwq1jj3bembhsu4svqeesioxxs7nh7u47jpqs9yrnu4tw7uzpwn8meowel5yxxmhatjunyxq6ujlhjjpr4qb5xnnj4q7qbo639wg21il075g0oa5zl68t9ji5wbfcogwdo83ls6lej4lcox0wfmx29f2qltny75bxx2ysbw2pjzbyma9f9l6gv8h7s9zims33jmkgwq46731dp4gu3trkqukvjo35evddjj7fspx7onaduvcaxnexo488qzdad6m19ilb5n51 == \n\l\i\b\2\r\8\7\s\f\y\y\e\u\a\b\q\r\o\t\a\y\x\e\l\a\h\c\o\s\2\2\s\h\1\n\0\c\w\p\b\r\8\9\g\v\u\z\6\x\8\r\5\z\w\0\8\7\n\1\d\t\b\a\x\w\j\h\u\c\v\n\e\6\9\a\g\9\8\4\e\n\y\0\v\2\p\t\3\u\t\q\l\a\g\m\7\v\1\7\u\v\o\j\2\o\d\l\8\d\r\o\d\z\a\z\z\y\6\v\5\x\p\v\h\n\y\u\o\5\s\a\c\8\t\z\8\a\5\g\6\6\7\y\g\y\o\y\e\q\f\u\o\j\g\l\i\n\2\3\q\1\p\n\i\v\4\i\m\i\m\b\7\k\u\1\y\i\9\3\7\y\b\6\f\5\a\9\t\b\h\h\8\9\c\q\6\f\l\f\8\s\3\i\w\8\m\o\0\0\4\b\5\1\3\d\m\p\3\t\h\i\x\s\u\m\7\y\4\d\9\f\6\f\b\p\l\a\6\y\f\e\z\e\5\1\b\w\w\q\1\j\j\3\b\e\m\b\h\s\u\4\s\v\q\e\e\s\i\o\x\x\s\7\n\h\7\u\4\7\j\p\q\s\9\y\r\n\u\4\t\w\7\u\z\p\w\n\8\m\e\o\w\e\l\5\y\x\x\m\h\a\t\j\u\n\y\x\q\6\u\j\l\h\j\j\p\r\4\q\b\5\x\n\n\j\4\q\7\q\b\o\6\3\9\w\g\2\1\i\l\0\7\5\g\0\o\a\5\z\l\6\8\t\9\j\i\5\w\b\f\c\o\g\w\d\o\8\3\l\s\6\l\e\j\4\l\c\o\x\0\w\f\m\x\2\9\f\2\q\l\t\n\y\7\5\b\x\x\2\y\s\b\w\2\p\j\z\b\y\m\a\9\f\9\l\6\g\v\8\h\7\s\9\z\i\m\s\3\3\j\m\k\g\w\q\4\6\7\3\1\d\p\4\g\u\3\t\r\k\q\u\k\v\j\o\3\5\e\v\d\d\j\j\7\f\s\p\x\7\o\n\a\d\u\v\c\a\x\n\e\x\o\4\8\8\q\z\d\a\d\6\m\1\9\i\l\b\5\n\5\1 ]] 00:31:51.826 00:45:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:51.826 00:45:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:31:51.826 [2024-04-24 00:45:45.526840] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:31:51.826 [2024-04-24 00:45:45.527306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144283 ] 00:31:52.083 [2024-04-24 00:45:45.696244] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.341 [2024-04-24 00:45:45.984672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.018  Copying: 512/512 [B] (average 500 kBps) 00:31:54.018 00:31:54.018 00:45:47 -- dd/posix.sh@93 -- # [[ nlib2r87sfyyeuabqrotayxelahcos22sh1n0cwpbr89gvuz6x8r5zw087n1dtbaxwjhucvne69ag984eny0v2pt3utqlagm7v17uvoj2odl8drodzazzy6v5xpvhnyuo5sac8tz8a5g667ygyoyeqfuojglin23q1pniv4imimb7ku1yi937yb6f5a9tbhh89cq6flf8s3iw8mo004b513dmp3thixsum7y4d9f6fbpla6yfeze51bwwq1jj3bembhsu4svqeesioxxs7nh7u47jpqs9yrnu4tw7uzpwn8meowel5yxxmhatjunyxq6ujlhjjpr4qb5xnnj4q7qbo639wg21il075g0oa5zl68t9ji5wbfcogwdo83ls6lej4lcox0wfmx29f2qltny75bxx2ysbw2pjzbyma9f9l6gv8h7s9zims33jmkgwq46731dp4gu3trkqukvjo35evddjj7fspx7onaduvcaxnexo488qzdad6m19ilb5n51 == \n\l\i\b\2\r\8\7\s\f\y\y\e\u\a\b\q\r\o\t\a\y\x\e\l\a\h\c\o\s\2\2\s\h\1\n\0\c\w\p\b\r\8\9\g\v\u\z\6\x\8\r\5\z\w\0\8\7\n\1\d\t\b\a\x\w\j\h\u\c\v\n\e\6\9\a\g\9\8\4\e\n\y\0\v\2\p\t\3\u\t\q\l\a\g\m\7\v\1\7\u\v\o\j\2\o\d\l\8\d\r\o\d\z\a\z\z\y\6\v\5\x\p\v\h\n\y\u\o\5\s\a\c\8\t\z\8\a\5\g\6\6\7\y\g\y\o\y\e\q\f\u\o\j\g\l\i\n\2\3\q\1\p\n\i\v\4\i\m\i\m\b\7\k\u\1\y\i\9\3\7\y\b\6\f\5\a\9\t\b\h\h\8\9\c\q\6\f\l\f\8\s\3\i\w\8\m\o\0\0\4\b\5\1\3\d\m\p\3\t\h\i\x\s\u\m\7\y\4\d\9\f\6\f\b\p\l\a\6\y\f\e\z\e\5\1\b\w\w\q\1\j\j\3\b\e\m\b\h\s\u\4\s\v\q\e\e\s\i\o\x\x\s\7\n\h\7\u\4\7\j\p\q\s\9\y\r\n\u\4\t\w\7\u\z\p\w\n\8\m\e\o\w\e\l\5\y\x\x\m\h\a\t\j\u\n\y\x\q\6\u\j\l\h\j\j\p\r\4\q\b\5\x\n\n\j\4\q\7\q\b\o\6\3\9\w\g\2\1\i\l\0\7\5\g\0\o\a\5\z\l\6\8\t\9\j\i\5\w\b\f\c\o\g\w\d\o\8\3\l\s\6\l\e\j\4\l\c\o\x\0\w\f\m\x\2\9\f\2\q\l\t\n\y\7\5\b\x\x\2\y\s\b\w\2\p\j\z\b\y\m\a\9\f\9\l\6\g\v\8\h\7\s\9\z\i\m\s\3\3\j\m\k\g\w\q\4\6\7\3\1\d\p\4\g\u\3\t\r\k\q\u\k\v\j\o\3\5\e\v\d\d\j\j\7\f\s\p\x\7\o\n\a\d\u\v\c\a\x\n\e\x\o\4\8\8\q\z\d\a\d\6\m\1\9\i\l\b\5\n\5\1 ]] 00:31:54.018 00:45:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:31:54.018 00:45:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:31:54.276 [2024-04-24 00:45:47.890165] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:31:54.276 [2024-04-24 00:45:47.890856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144310 ] 00:31:54.276 [2024-04-24 00:45:48.068327] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.534 [2024-04-24 00:45:48.306222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.473  Copying: 512/512 [B] (average 250 kBps) 00:31:56.473 00:31:56.473 ************************************ 00:31:56.473 END TEST dd_flags_misc 00:31:56.473 ************************************ 00:31:56.474 00:45:50 -- dd/posix.sh@93 -- # [[ nlib2r87sfyyeuabqrotayxelahcos22sh1n0cwpbr89gvuz6x8r5zw087n1dtbaxwjhucvne69ag984eny0v2pt3utqlagm7v17uvoj2odl8drodzazzy6v5xpvhnyuo5sac8tz8a5g667ygyoyeqfuojglin23q1pniv4imimb7ku1yi937yb6f5a9tbhh89cq6flf8s3iw8mo004b513dmp3thixsum7y4d9f6fbpla6yfeze51bwwq1jj3bembhsu4svqeesioxxs7nh7u47jpqs9yrnu4tw7uzpwn8meowel5yxxmhatjunyxq6ujlhjjpr4qb5xnnj4q7qbo639wg21il075g0oa5zl68t9ji5wbfcogwdo83ls6lej4lcox0wfmx29f2qltny75bxx2ysbw2pjzbyma9f9l6gv8h7s9zims33jmkgwq46731dp4gu3trkqukvjo35evddjj7fspx7onaduvcaxnexo488qzdad6m19ilb5n51 == \n\l\i\b\2\r\8\7\s\f\y\y\e\u\a\b\q\r\o\t\a\y\x\e\l\a\h\c\o\s\2\2\s\h\1\n\0\c\w\p\b\r\8\9\g\v\u\z\6\x\8\r\5\z\w\0\8\7\n\1\d\t\b\a\x\w\j\h\u\c\v\n\e\6\9\a\g\9\8\4\e\n\y\0\v\2\p\t\3\u\t\q\l\a\g\m\7\v\1\7\u\v\o\j\2\o\d\l\8\d\r\o\d\z\a\z\z\y\6\v\5\x\p\v\h\n\y\u\o\5\s\a\c\8\t\z\8\a\5\g\6\6\7\y\g\y\o\y\e\q\f\u\o\j\g\l\i\n\2\3\q\1\p\n\i\v\4\i\m\i\m\b\7\k\u\1\y\i\9\3\7\y\b\6\f\5\a\9\t\b\h\h\8\9\c\q\6\f\l\f\8\s\3\i\w\8\m\o\0\0\4\b\5\1\3\d\m\p\3\t\h\i\x\s\u\m\7\y\4\d\9\f\6\f\b\p\l\a\6\y\f\e\z\e\5\1\b\w\w\q\1\j\j\3\b\e\m\b\h\s\u\4\s\v\q\e\e\s\i\o\x\x\s\7\n\h\7\u\4\7\j\p\q\s\9\y\r\n\u\4\t\w\7\u\z\p\w\n\8\m\e\o\w\e\l\5\y\x\x\m\h\a\t\j\u\n\y\x\q\6\u\j\l\h\j\j\p\r\4\q\b\5\x\n\n\j\4\q\7\q\b\o\6\3\9\w\g\2\1\i\l\0\7\5\g\0\o\a\5\z\l\6\8\t\9\j\i\5\w\b\f\c\o\g\w\d\o\8\3\l\s\6\l\e\j\4\l\c\o\x\0\w\f\m\x\2\9\f\2\q\l\t\n\y\7\5\b\x\x\2\y\s\b\w\2\p\j\z\b\y\m\a\9\f\9\l\6\g\v\8\h\7\s\9\z\i\m\s\3\3\j\m\k\g\w\q\4\6\7\3\1\d\p\4\g\u\3\t\r\k\q\u\k\v\j\o\3\5\e\v\d\d\j\j\7\f\s\p\x\7\o\n\a\d\u\v\c\a\x\n\e\x\o\4\8\8\q\z\d\a\d\6\m\1\9\i\l\b\5\n\5\1 ]] 00:31:56.474 00:31:56.474 real 0m18.587s 00:31:56.474 user 0m15.547s 00:31:56.474 sys 0m1.964s 00:31:56.474 00:45:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:56.474 00:45:50 -- common/autotest_common.sh@10 -- # set +x 00:31:56.474 00:45:50 -- dd/posix.sh@131 -- # tests_forced_aio 00:31:56.474 00:45:50 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:31:56.474 * Second test run, using AIO 00:31:56.474 00:45:50 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:31:56.474 00:45:50 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:31:56.474 00:45:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:56.474 00:45:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:56.474 00:45:50 -- common/autotest_common.sh@10 -- # set +x 00:31:56.474 ************************************ 00:31:56.474 START TEST dd_flag_append_forced_aio 00:31:56.474 ************************************ 00:31:56.474 00:45:50 -- common/autotest_common.sh@1111 -- # append 00:31:56.474 00:45:50 -- dd/posix.sh@16 -- # local dump0 00:31:56.474 00:45:50 -- dd/posix.sh@17 -- # local dump1 00:31:56.474 00:45:50 -- dd/posix.sh@19 -- # gen_bytes 32 00:31:56.474 00:45:50 -- dd/common.sh@98 -- # xtrace_disable 
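Annotation (not part of the captured log): the append check starting here generates two 32-byte random strings, writes one into dd.dump0 and the other into dd.dump1, copies dump0 onto dump1 with --oflag=append (and --aio for this second, forced-AIO pass), and then expects dump1 to hold its original bytes followed by dump0's. A rough equivalent of the sequence visible in the trace, with the final comparison spelled out explicitly:

  # Sketch of dd_flag_append_forced_aio; gen_bytes is the suite's helper, values differ per run
  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  dump0=$(gen_bytes 32)            # "1unj64..." in this run
  dump1=$(gen_bytes 32)            # "j4ozeo..." in this run
  printf %s "$dump0" > "$SRC"
  printf %s "$dump1" > "$DST"
  "$DD" --aio --if="$SRC" --of="$DST" --oflag=append
  [[ $(< "$DST") == "${dump1}${dump0}" ]] || echo "append did not preserve the existing bytes"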
00:31:56.474 00:45:50 -- common/autotest_common.sh@10 -- # set +x 00:31:56.474 00:45:50 -- dd/posix.sh@19 -- # dump0=1unj64zrv1oc5ht009fmnoewpoqdch1c 00:31:56.474 00:45:50 -- dd/posix.sh@20 -- # gen_bytes 32 00:31:56.474 00:45:50 -- dd/common.sh@98 -- # xtrace_disable 00:31:56.474 00:45:50 -- common/autotest_common.sh@10 -- # set +x 00:31:56.732 00:45:50 -- dd/posix.sh@20 -- # dump1=j4ozeoedso0cd7sk8wq4tbtfnbutfunp 00:31:56.732 00:45:50 -- dd/posix.sh@22 -- # printf %s 1unj64zrv1oc5ht009fmnoewpoqdch1c 00:31:56.732 00:45:50 -- dd/posix.sh@23 -- # printf %s j4ozeoedso0cd7sk8wq4tbtfnbutfunp 00:31:56.732 00:45:50 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:31:56.732 [2024-04-24 00:45:50.345405] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:31:56.732 [2024-04-24 00:45:50.346252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144371 ] 00:31:56.990 [2024-04-24 00:45:50.531212] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.990 [2024-04-24 00:45:50.782553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.013  Copying: 32/32 [B] (average 31 kBps) 00:31:59.013 00:31:59.013 ************************************ 00:31:59.013 END TEST dd_flag_append_forced_aio 00:31:59.013 ************************************ 00:31:59.013 00:45:52 -- dd/posix.sh@27 -- # [[ j4ozeoedso0cd7sk8wq4tbtfnbutfunp1unj64zrv1oc5ht009fmnoewpoqdch1c == \j\4\o\z\e\o\e\d\s\o\0\c\d\7\s\k\8\w\q\4\t\b\t\f\n\b\u\t\f\u\n\p\1\u\n\j\6\4\z\r\v\1\o\c\5\h\t\0\0\9\f\m\n\o\e\w\p\o\q\d\c\h\1\c ]] 00:31:59.013 00:31:59.013 real 0m2.319s 00:31:59.013 user 0m1.914s 00:31:59.013 sys 0m0.278s 00:31:59.013 00:45:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:59.013 00:45:52 -- common/autotest_common.sh@10 -- # set +x 00:31:59.013 00:45:52 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:31:59.013 00:45:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:59.013 00:45:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:59.013 00:45:52 -- common/autotest_common.sh@10 -- # set +x 00:31:59.013 ************************************ 00:31:59.013 START TEST dd_flag_directory_forced_aio 00:31:59.013 ************************************ 00:31:59.013 00:45:52 -- common/autotest_common.sh@1111 -- # directory 00:31:59.013 00:45:52 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:59.013 00:45:52 -- common/autotest_common.sh@638 -- # local es=0 00:31:59.013 00:45:52 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:59.013 00:45:52 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:59.013 00:45:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:59.013 00:45:52 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:59.013 00:45:52 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:59.013 00:45:52 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:59.013 00:45:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:59.013 00:45:52 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:59.013 00:45:52 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:31:59.013 00:45:52 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:31:59.013 [2024-04-24 00:45:52.745731] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:31:59.013 [2024-04-24 00:45:52.746725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144428 ] 00:31:59.270 [2024-04-24 00:45:52.912979] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.527 [2024-04-24 00:45:53.143621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.790 [2024-04-24 00:45:53.517766] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:59.790 [2024-04-24 00:45:53.518115] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:31:59.790 [2024-04-24 00:45:53.518183] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:00.723 [2024-04-24 00:45:54.430898] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:32:01.291 00:45:54 -- common/autotest_common.sh@641 -- # es=236 00:32:01.291 00:45:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:01.291 00:45:54 -- common/autotest_common.sh@650 -- # es=108 00:32:01.291 00:45:54 -- common/autotest_common.sh@651 -- # case "$es" in 00:32:01.291 00:45:54 -- common/autotest_common.sh@658 -- # es=1 00:32:01.291 00:45:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:01.291 00:45:54 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:32:01.291 00:45:54 -- common/autotest_common.sh@638 -- # local es=0 00:32:01.291 00:45:54 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:32:01.291 00:45:54 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:01.291 00:45:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:01.291 00:45:54 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:01.291 00:45:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:01.291 00:45:54 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:01.291 00:45:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:01.291 00:45:54 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
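Annotation (not part of the captured log): by this point the first directory check has completed. spdk_dd refused --iflag=directory on a regular file ("Not a directory", exit status 236), and the NOT wrapper counted that expected failure as a pass; the trace continuing below repeats the same check on the output side with --oflag=directory. Stripped of the NOT/valid_exec_arg plumbing, the two negative checks amount to roughly:

  # Sketch of dd_flag_directory_forced_aio without the suite's NOT helper
  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  if "$DD" --aio --if="$DUMP" --iflag=directory --of="$DUMP"; then
    echo "unexpected success: --iflag=directory accepted a regular file"
  fi
  if "$DD" --aio --if="$DUMP" --of="$DUMP" --oflag=directory; then
    echo "unexpected success: --oflag=directory accepted a regular file"
  fi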
00:32:01.291 00:45:54 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:01.291 00:45:54 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:32:01.291 [2024-04-24 00:45:55.001682] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:32:01.291 [2024-04-24 00:45:55.002110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144461 ] 00:32:01.549 [2024-04-24 00:45:55.188278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.807 [2024-04-24 00:45:55.481575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.065 [2024-04-24 00:45:55.836594] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:32:02.065 [2024-04-24 00:45:55.836862] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:32:02.065 [2024-04-24 00:45:55.836941] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:02.998 [2024-04-24 00:45:56.683209] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:32:03.565 00:45:57 -- common/autotest_common.sh@641 -- # es=236 00:32:03.565 00:45:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:03.565 00:45:57 -- common/autotest_common.sh@650 -- # es=108 00:32:03.565 ************************************ 00:32:03.565 END TEST dd_flag_directory_forced_aio 00:32:03.565 ************************************ 00:32:03.565 00:45:57 -- common/autotest_common.sh@651 -- # case "$es" in 00:32:03.565 00:45:57 -- common/autotest_common.sh@658 -- # es=1 00:32:03.565 00:45:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:03.565 00:32:03.565 real 0m4.448s 00:32:03.565 user 0m3.723s 00:32:03.565 sys 0m0.518s 00:32:03.565 00:45:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:03.565 00:45:57 -- common/autotest_common.sh@10 -- # set +x 00:32:03.565 00:45:57 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:32:03.565 00:45:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:03.565 00:45:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:03.565 00:45:57 -- common/autotest_common.sh@10 -- # set +x 00:32:03.565 ************************************ 00:32:03.565 START TEST dd_flag_nofollow_forced_aio 00:32:03.565 ************************************ 00:32:03.565 00:45:57 -- common/autotest_common.sh@1111 -- # nofollow 00:32:03.565 00:45:57 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:32:03.565 00:45:57 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:32:03.565 00:45:57 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:32:03.565 00:45:57 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:32:03.565 00:45:57 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:03.565 00:45:57 -- common/autotest_common.sh@638 -- # local es=0 00:32:03.565 00:45:57 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:03.565 00:45:57 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:03.565 00:45:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:03.565 00:45:57 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:03.565 00:45:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:03.565 00:45:57 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:03.565 00:45:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:03.565 00:45:57 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:03.565 00:45:57 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:03.565 00:45:57 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:03.565 [2024-04-24 00:45:57.308780] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:32:03.565 [2024-04-24 00:45:57.309213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144519 ] 00:32:03.823 [2024-04-24 00:45:57.497001] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.081 [2024-04-24 00:45:57.776101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.338 [2024-04-24 00:45:58.118238] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:32:04.338 [2024-04-24 00:45:58.118513] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:32:04.338 [2024-04-24 00:45:58.118675] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:05.270 [2024-04-24 00:45:58.987158] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:32:05.836 00:45:59 -- common/autotest_common.sh@641 -- # es=216 00:32:05.836 00:45:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:05.836 00:45:59 -- common/autotest_common.sh@650 -- # es=88 00:32:05.836 00:45:59 -- common/autotest_common.sh@651 -- # case "$es" in 00:32:05.836 00:45:59 -- common/autotest_common.sh@658 -- # es=1 00:32:05.836 00:45:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:05.837 00:45:59 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:32:05.837 00:45:59 -- common/autotest_common.sh@638 -- # local es=0 00:32:05.837 00:45:59 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:32:05.837 00:45:59 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:05.837 00:45:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:05.837 00:45:59 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:05.837 00:45:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:05.837 00:45:59 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:05.837 00:45:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:05.837 00:45:59 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:05.837 00:45:59 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:32:05.837 00:45:59 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:32:05.837 [2024-04-24 00:45:59.529125] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:32:05.837 [2024-04-24 00:45:59.529585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144552 ] 00:32:06.095 [2024-04-24 00:45:59.710996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.354 [2024-04-24 00:45:59.942201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.613 [2024-04-24 00:46:00.315453] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:32:06.613 [2024-04-24 00:46:00.315793] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:32:06.613 [2024-04-24 00:46:00.315993] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:07.546 [2024-04-24 00:46:01.208289] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:32:08.112 00:46:01 -- common/autotest_common.sh@641 -- # es=216 00:32:08.112 00:46:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:08.112 00:46:01 -- common/autotest_common.sh@650 -- # es=88 00:32:08.112 00:46:01 -- common/autotest_common.sh@651 -- # case "$es" in 00:32:08.112 00:46:01 -- common/autotest_common.sh@658 -- # es=1 00:32:08.112 00:46:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:08.112 00:46:01 -- dd/posix.sh@46 -- # gen_bytes 512 00:32:08.112 00:46:01 -- dd/common.sh@98 -- # xtrace_disable 00:32:08.112 00:46:01 -- common/autotest_common.sh@10 -- # set +x 00:32:08.112 00:46:01 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:08.112 [2024-04-24 00:46:01.780657] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:08.112 [2024-04-24 00:46:01.780869] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144579 ] 00:32:08.371 [2024-04-24 00:46:01.963456] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.629 [2024-04-24 00:46:02.259647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.266  Copying: 512/512 [B] (average 500 kBps) 00:32:10.266 00:32:10.266 ************************************ 00:32:10.266 END TEST dd_flag_nofollow_forced_aio 00:32:10.266 ************************************ 00:32:10.266 00:46:04 -- dd/posix.sh@49 -- # [[ fgh8m3lzclq9fl4ki4njwmybpcjpllax2b9e90dqdin7uka7iudhlkl1t1aiz71unlpzh9tfiad5ce1ks58kmpbggpmvjrljmdr08rka1wq73f65mjrztq0fdrl4huzdqw3yww1hhqbwwgt2esf6qed3tgl6zyeeijo43cob6lpmxlvkub6lm9g6rccajvccsef2pkrp4ddx5tizfzjhmcqk4u4nb2msoioiebfnkyaeu9mzvq19nc7ceyqs5r7gcnvermo2ip6livb2sqndtycdp7oj509jsby3lx5d7t1tdgoizohr8073us4o9jh614q394pa12qjyxal7oe8ocvi67j8o3tziipxbx0wghxj34md2s6pzncxiw351lo67sschulpy3x587kiyl8w2x1owcypocehqqm2y3jabhup0b012yvkokaj5xj6s83coyud7l0zx2ppyrmswlyyko0lqo01k9exozlg8byqbdfoqzkc05rel3kw53dx2hhk == \f\g\h\8\m\3\l\z\c\l\q\9\f\l\4\k\i\4\n\j\w\m\y\b\p\c\j\p\l\l\a\x\2\b\9\e\9\0\d\q\d\i\n\7\u\k\a\7\i\u\d\h\l\k\l\1\t\1\a\i\z\7\1\u\n\l\p\z\h\9\t\f\i\a\d\5\c\e\1\k\s\5\8\k\m\p\b\g\g\p\m\v\j\r\l\j\m\d\r\0\8\r\k\a\1\w\q\7\3\f\6\5\m\j\r\z\t\q\0\f\d\r\l\4\h\u\z\d\q\w\3\y\w\w\1\h\h\q\b\w\w\g\t\2\e\s\f\6\q\e\d\3\t\g\l\6\z\y\e\e\i\j\o\4\3\c\o\b\6\l\p\m\x\l\v\k\u\b\6\l\m\9\g\6\r\c\c\a\j\v\c\c\s\e\f\2\p\k\r\p\4\d\d\x\5\t\i\z\f\z\j\h\m\c\q\k\4\u\4\n\b\2\m\s\o\i\o\i\e\b\f\n\k\y\a\e\u\9\m\z\v\q\1\9\n\c\7\c\e\y\q\s\5\r\7\g\c\n\v\e\r\m\o\2\i\p\6\l\i\v\b\2\s\q\n\d\t\y\c\d\p\7\o\j\5\0\9\j\s\b\y\3\l\x\5\d\7\t\1\t\d\g\o\i\z\o\h\r\8\0\7\3\u\s\4\o\9\j\h\6\1\4\q\3\9\4\p\a\1\2\q\j\y\x\a\l\7\o\e\8\o\c\v\i\6\7\j\8\o\3\t\z\i\i\p\x\b\x\0\w\g\h\x\j\3\4\m\d\2\s\6\p\z\n\c\x\i\w\3\5\1\l\o\6\7\s\s\c\h\u\l\p\y\3\x\5\8\7\k\i\y\l\8\w\2\x\1\o\w\c\y\p\o\c\e\h\q\q\m\2\y\3\j\a\b\h\u\p\0\b\0\1\2\y\v\k\o\k\a\j\5\x\j\6\s\8\3\c\o\y\u\d\7\l\0\z\x\2\p\p\y\r\m\s\w\l\y\y\k\o\0\l\q\o\0\1\k\9\e\x\o\z\l\g\8\b\y\q\b\d\f\o\q\z\k\c\0\5\r\e\l\3\k\w\5\3\d\x\2\h\h\k ]] 00:32:10.266 00:32:10.266 real 0m6.815s 00:32:10.266 user 0m5.689s 00:32:10.266 sys 0m0.798s 00:32:10.266 00:46:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:10.266 00:46:04 -- common/autotest_common.sh@10 -- # set +x 00:32:10.524 00:46:04 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:32:10.524 00:46:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:10.524 00:46:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:10.524 00:46:04 -- common/autotest_common.sh@10 -- # set +x 00:32:10.524 ************************************ 00:32:10.524 START TEST dd_flag_noatime_forced_aio 00:32:10.524 ************************************ 00:32:10.524 00:46:04 -- common/autotest_common.sh@1111 -- # noatime 00:32:10.524 00:46:04 -- dd/posix.sh@53 -- # local atime_if 00:32:10.524 00:46:04 -- dd/posix.sh@54 -- # local atime_of 00:32:10.524 00:46:04 -- dd/posix.sh@58 -- # gen_bytes 512 00:32:10.524 00:46:04 -- dd/common.sh@98 -- # xtrace_disable 00:32:10.524 00:46:04 -- common/autotest_common.sh@10 -- # set +x 00:32:10.524 00:46:04 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:10.524 00:46:04 -- dd/posix.sh@60 -- # atime_if=1713919562 
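Annotation (not part of the captured log): the noatime run beginning here has just recorded dump0's access time (atime_if=1713919562). The records that follow capture dump1's atime as well, sleep one second, read dump0 through spdk_dd with --iflag=noatime and check that the atime has not moved, then repeat the copy without the flag and expect the atime to advance. A condensed sketch of that sequence (dump1's atime check is omitted for brevity; epoch values differ per run):

  # Sketch of dd_flag_noatime_forced_aio, same paths and flags as in the trace
  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  atime_if=$(stat --printf=%X "$SRC")     # 1713919562 in this run
  sleep 1
  "$DD" --aio --if="$SRC" --iflag=noatime --of="$DST"
  (( $(stat --printf=%X "$SRC") == atime_if )) || echo "atime changed despite --iflag=noatime"
  "$DD" --aio --if="$SRC" --of="$DST"     # same copy without noatime
  (( $(stat --printf=%X "$SRC") > atime_if )) || echo "atime did not advance on a normal read"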
00:32:10.524 00:46:04 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:10.524 00:46:04 -- dd/posix.sh@61 -- # atime_of=1713919564 00:32:10.524 00:46:04 -- dd/posix.sh@66 -- # sleep 1 00:32:11.458 00:46:05 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:11.458 [2024-04-24 00:46:05.242466] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:32:11.458 [2024-04-24 00:46:05.242667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144647 ] 00:32:11.717 [2024-04-24 00:46:05.427853] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.975 [2024-04-24 00:46:05.695517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.013  Copying: 512/512 [B] (average 500 kBps) 00:32:14.013 00:32:14.013 00:46:07 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:14.013 00:46:07 -- dd/posix.sh@69 -- # (( atime_if == 1713919562 )) 00:32:14.013 00:46:07 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:14.013 00:46:07 -- dd/posix.sh@70 -- # (( atime_of == 1713919564 )) 00:32:14.013 00:46:07 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:14.013 [2024-04-24 00:46:07.558561] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:14.013 [2024-04-24 00:46:07.558906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144688 ] 00:32:14.013 [2024-04-24 00:46:07.748990] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.270 [2024-04-24 00:46:08.036684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.222  Copying: 512/512 [B] (average 500 kBps) 00:32:16.222 00:32:16.222 00:46:09 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:16.222 ************************************ 00:32:16.222 END TEST dd_flag_noatime_forced_aio 00:32:16.222 ************************************ 00:32:16.222 00:46:09 -- dd/posix.sh@73 -- # (( atime_if < 1713919568 )) 00:32:16.222 00:32:16.222 real 0m5.779s 00:32:16.222 user 0m3.989s 00:32:16.222 sys 0m0.513s 00:32:16.222 00:46:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:16.222 00:46:09 -- common/autotest_common.sh@10 -- # set +x 00:32:16.222 00:46:09 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:32:16.222 00:46:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:16.222 00:46:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:16.222 00:46:09 -- common/autotest_common.sh@10 -- # set +x 00:32:16.222 ************************************ 00:32:16.222 START TEST dd_flags_misc_forced_aio 00:32:16.222 ************************************ 00:32:16.222 00:46:09 -- common/autotest_common.sh@1111 -- # io 00:32:16.222 00:46:09 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:32:16.222 00:46:09 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:32:16.222 00:46:09 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:32:16.222 00:46:09 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:32:16.222 00:46:09 -- dd/posix.sh@86 -- # gen_bytes 512 00:32:16.222 00:46:09 -- dd/common.sh@98 -- # xtrace_disable 00:32:16.222 00:46:09 -- common/autotest_common.sh@10 -- # set +x 00:32:16.222 00:46:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:32:16.222 00:46:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:32:16.479 [2024-04-24 00:46:10.077688] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:16.479 [2024-04-24 00:46:10.077952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144740 ] 00:32:16.479 [2024-04-24 00:46:10.258178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.737 [2024-04-24 00:46:10.513976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.735  Copying: 512/512 [B] (average 500 kBps) 00:32:18.735 00:32:18.735 00:46:12 -- dd/posix.sh@93 -- # [[ amuo4wqygg5yu73z49zcbmk64vaiae9rsdgfuwy4dycc96mzx2hl7zz2qj9ma0eue8r8c98gecl4kt92xvqop826ngtjd9r0qgpqg6karpyzssobuaxa5yknpgfslvyczabufma7n20ui9ijy331atkhlwlkiap9u1pmhv6oopaq290vcprms6qwtyftxmh3h6gegc5zyc0x6rqe8gx2ya6el3deaff16iwt2gdb903abagmlv2sfbv6i9ixyduy5h5y9x2llk9nth5vmzwd9xwnx6sogux25letd6ggdu8ybzwgfeyq2b1krrq4zrkt3vv10wsamiohw1nuz532614pi28efue4ztn1wh8cenikx9zjuslfnl2y7tzgczr56c4pkits99r5ah957hvmk2np79evjr8oyajy2hiu8me4tylyazdc86e0trljegwrp83p802qdhys0hs1mogkl8edopj2devej1i8yc7nm6fmu3vgufa20slvh6utkpp0 == \a\m\u\o\4\w\q\y\g\g\5\y\u\7\3\z\4\9\z\c\b\m\k\6\4\v\a\i\a\e\9\r\s\d\g\f\u\w\y\4\d\y\c\c\9\6\m\z\x\2\h\l\7\z\z\2\q\j\9\m\a\0\e\u\e\8\r\8\c\9\8\g\e\c\l\4\k\t\9\2\x\v\q\o\p\8\2\6\n\g\t\j\d\9\r\0\q\g\p\q\g\6\k\a\r\p\y\z\s\s\o\b\u\a\x\a\5\y\k\n\p\g\f\s\l\v\y\c\z\a\b\u\f\m\a\7\n\2\0\u\i\9\i\j\y\3\3\1\a\t\k\h\l\w\l\k\i\a\p\9\u\1\p\m\h\v\6\o\o\p\a\q\2\9\0\v\c\p\r\m\s\6\q\w\t\y\f\t\x\m\h\3\h\6\g\e\g\c\5\z\y\c\0\x\6\r\q\e\8\g\x\2\y\a\6\e\l\3\d\e\a\f\f\1\6\i\w\t\2\g\d\b\9\0\3\a\b\a\g\m\l\v\2\s\f\b\v\6\i\9\i\x\y\d\u\y\5\h\5\y\9\x\2\l\l\k\9\n\t\h\5\v\m\z\w\d\9\x\w\n\x\6\s\o\g\u\x\2\5\l\e\t\d\6\g\g\d\u\8\y\b\z\w\g\f\e\y\q\2\b\1\k\r\r\q\4\z\r\k\t\3\v\v\1\0\w\s\a\m\i\o\h\w\1\n\u\z\5\3\2\6\1\4\p\i\2\8\e\f\u\e\4\z\t\n\1\w\h\8\c\e\n\i\k\x\9\z\j\u\s\l\f\n\l\2\y\7\t\z\g\c\z\r\5\6\c\4\p\k\i\t\s\9\9\r\5\a\h\9\5\7\h\v\m\k\2\n\p\7\9\e\v\j\r\8\o\y\a\j\y\2\h\i\u\8\m\e\4\t\y\l\y\a\z\d\c\8\6\e\0\t\r\l\j\e\g\w\r\p\8\3\p\8\0\2\q\d\h\y\s\0\h\s\1\m\o\g\k\l\8\e\d\o\p\j\2\d\e\v\e\j\1\i\8\y\c\7\n\m\6\f\m\u\3\v\g\u\f\a\2\0\s\l\v\h\6\u\t\k\p\p\0 ]] 00:32:18.735 00:46:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:32:18.735 00:46:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:32:18.735 [2024-04-24 00:46:12.433886] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:18.735 [2024-04-24 00:46:12.434081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144773 ] 00:32:18.992 [2024-04-24 00:46:12.622854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.249 [2024-04-24 00:46:12.865139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.879  Copying: 512/512 [B] (average 500 kBps) 00:32:20.879 00:32:20.880 00:46:14 -- dd/posix.sh@93 -- # [[ amuo4wqygg5yu73z49zcbmk64vaiae9rsdgfuwy4dycc96mzx2hl7zz2qj9ma0eue8r8c98gecl4kt92xvqop826ngtjd9r0qgpqg6karpyzssobuaxa5yknpgfslvyczabufma7n20ui9ijy331atkhlwlkiap9u1pmhv6oopaq290vcprms6qwtyftxmh3h6gegc5zyc0x6rqe8gx2ya6el3deaff16iwt2gdb903abagmlv2sfbv6i9ixyduy5h5y9x2llk9nth5vmzwd9xwnx6sogux25letd6ggdu8ybzwgfeyq2b1krrq4zrkt3vv10wsamiohw1nuz532614pi28efue4ztn1wh8cenikx9zjuslfnl2y7tzgczr56c4pkits99r5ah957hvmk2np79evjr8oyajy2hiu8me4tylyazdc86e0trljegwrp83p802qdhys0hs1mogkl8edopj2devej1i8yc7nm6fmu3vgufa20slvh6utkpp0 == \a\m\u\o\4\w\q\y\g\g\5\y\u\7\3\z\4\9\z\c\b\m\k\6\4\v\a\i\a\e\9\r\s\d\g\f\u\w\y\4\d\y\c\c\9\6\m\z\x\2\h\l\7\z\z\2\q\j\9\m\a\0\e\u\e\8\r\8\c\9\8\g\e\c\l\4\k\t\9\2\x\v\q\o\p\8\2\6\n\g\t\j\d\9\r\0\q\g\p\q\g\6\k\a\r\p\y\z\s\s\o\b\u\a\x\a\5\y\k\n\p\g\f\s\l\v\y\c\z\a\b\u\f\m\a\7\n\2\0\u\i\9\i\j\y\3\3\1\a\t\k\h\l\w\l\k\i\a\p\9\u\1\p\m\h\v\6\o\o\p\a\q\2\9\0\v\c\p\r\m\s\6\q\w\t\y\f\t\x\m\h\3\h\6\g\e\g\c\5\z\y\c\0\x\6\r\q\e\8\g\x\2\y\a\6\e\l\3\d\e\a\f\f\1\6\i\w\t\2\g\d\b\9\0\3\a\b\a\g\m\l\v\2\s\f\b\v\6\i\9\i\x\y\d\u\y\5\h\5\y\9\x\2\l\l\k\9\n\t\h\5\v\m\z\w\d\9\x\w\n\x\6\s\o\g\u\x\2\5\l\e\t\d\6\g\g\d\u\8\y\b\z\w\g\f\e\y\q\2\b\1\k\r\r\q\4\z\r\k\t\3\v\v\1\0\w\s\a\m\i\o\h\w\1\n\u\z\5\3\2\6\1\4\p\i\2\8\e\f\u\e\4\z\t\n\1\w\h\8\c\e\n\i\k\x\9\z\j\u\s\l\f\n\l\2\y\7\t\z\g\c\z\r\5\6\c\4\p\k\i\t\s\9\9\r\5\a\h\9\5\7\h\v\m\k\2\n\p\7\9\e\v\j\r\8\o\y\a\j\y\2\h\i\u\8\m\e\4\t\y\l\y\a\z\d\c\8\6\e\0\t\r\l\j\e\g\w\r\p\8\3\p\8\0\2\q\d\h\y\s\0\h\s\1\m\o\g\k\l\8\e\d\o\p\j\2\d\e\v\e\j\1\i\8\y\c\7\n\m\6\f\m\u\3\v\g\u\f\a\2\0\s\l\v\h\6\u\t\k\p\p\0 ]] 00:32:20.880 00:46:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:32:20.880 00:46:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:32:21.137 [2024-04-24 00:46:14.680160] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:21.137 [2024-04-24 00:46:14.680357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144804 ] 00:32:21.137 [2024-04-24 00:46:14.866063] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.395 [2024-04-24 00:46:15.157620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.370  Copying: 512/512 [B] (average 166 kBps) 00:32:23.370 00:32:23.370 00:46:16 -- dd/posix.sh@93 -- # [[ amuo4wqygg5yu73z49zcbmk64vaiae9rsdgfuwy4dycc96mzx2hl7zz2qj9ma0eue8r8c98gecl4kt92xvqop826ngtjd9r0qgpqg6karpyzssobuaxa5yknpgfslvyczabufma7n20ui9ijy331atkhlwlkiap9u1pmhv6oopaq290vcprms6qwtyftxmh3h6gegc5zyc0x6rqe8gx2ya6el3deaff16iwt2gdb903abagmlv2sfbv6i9ixyduy5h5y9x2llk9nth5vmzwd9xwnx6sogux25letd6ggdu8ybzwgfeyq2b1krrq4zrkt3vv10wsamiohw1nuz532614pi28efue4ztn1wh8cenikx9zjuslfnl2y7tzgczr56c4pkits99r5ah957hvmk2np79evjr8oyajy2hiu8me4tylyazdc86e0trljegwrp83p802qdhys0hs1mogkl8edopj2devej1i8yc7nm6fmu3vgufa20slvh6utkpp0 == \a\m\u\o\4\w\q\y\g\g\5\y\u\7\3\z\4\9\z\c\b\m\k\6\4\v\a\i\a\e\9\r\s\d\g\f\u\w\y\4\d\y\c\c\9\6\m\z\x\2\h\l\7\z\z\2\q\j\9\m\a\0\e\u\e\8\r\8\c\9\8\g\e\c\l\4\k\t\9\2\x\v\q\o\p\8\2\6\n\g\t\j\d\9\r\0\q\g\p\q\g\6\k\a\r\p\y\z\s\s\o\b\u\a\x\a\5\y\k\n\p\g\f\s\l\v\y\c\z\a\b\u\f\m\a\7\n\2\0\u\i\9\i\j\y\3\3\1\a\t\k\h\l\w\l\k\i\a\p\9\u\1\p\m\h\v\6\o\o\p\a\q\2\9\0\v\c\p\r\m\s\6\q\w\t\y\f\t\x\m\h\3\h\6\g\e\g\c\5\z\y\c\0\x\6\r\q\e\8\g\x\2\y\a\6\e\l\3\d\e\a\f\f\1\6\i\w\t\2\g\d\b\9\0\3\a\b\a\g\m\l\v\2\s\f\b\v\6\i\9\i\x\y\d\u\y\5\h\5\y\9\x\2\l\l\k\9\n\t\h\5\v\m\z\w\d\9\x\w\n\x\6\s\o\g\u\x\2\5\l\e\t\d\6\g\g\d\u\8\y\b\z\w\g\f\e\y\q\2\b\1\k\r\r\q\4\z\r\k\t\3\v\v\1\0\w\s\a\m\i\o\h\w\1\n\u\z\5\3\2\6\1\4\p\i\2\8\e\f\u\e\4\z\t\n\1\w\h\8\c\e\n\i\k\x\9\z\j\u\s\l\f\n\l\2\y\7\t\z\g\c\z\r\5\6\c\4\p\k\i\t\s\9\9\r\5\a\h\9\5\7\h\v\m\k\2\n\p\7\9\e\v\j\r\8\o\y\a\j\y\2\h\i\u\8\m\e\4\t\y\l\y\a\z\d\c\8\6\e\0\t\r\l\j\e\g\w\r\p\8\3\p\8\0\2\q\d\h\y\s\0\h\s\1\m\o\g\k\l\8\e\d\o\p\j\2\d\e\v\e\j\1\i\8\y\c\7\n\m\6\f\m\u\3\v\g\u\f\a\2\0\s\l\v\h\6\u\t\k\p\p\0 ]] 00:32:23.370 00:46:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:32:23.370 00:46:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:32:23.370 [2024-04-24 00:46:17.029444] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:23.371 [2024-04-24 00:46:17.029751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144834 ] 00:32:23.628 [2024-04-24 00:46:17.209507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.886 [2024-04-24 00:46:17.474537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.519  Copying: 512/512 [B] (average 166 kBps) 00:32:25.519 00:32:25.519 00:46:19 -- dd/posix.sh@93 -- # [[ amuo4wqygg5yu73z49zcbmk64vaiae9rsdgfuwy4dycc96mzx2hl7zz2qj9ma0eue8r8c98gecl4kt92xvqop826ngtjd9r0qgpqg6karpyzssobuaxa5yknpgfslvyczabufma7n20ui9ijy331atkhlwlkiap9u1pmhv6oopaq290vcprms6qwtyftxmh3h6gegc5zyc0x6rqe8gx2ya6el3deaff16iwt2gdb903abagmlv2sfbv6i9ixyduy5h5y9x2llk9nth5vmzwd9xwnx6sogux25letd6ggdu8ybzwgfeyq2b1krrq4zrkt3vv10wsamiohw1nuz532614pi28efue4ztn1wh8cenikx9zjuslfnl2y7tzgczr56c4pkits99r5ah957hvmk2np79evjr8oyajy2hiu8me4tylyazdc86e0trljegwrp83p802qdhys0hs1mogkl8edopj2devej1i8yc7nm6fmu3vgufa20slvh6utkpp0 == \a\m\u\o\4\w\q\y\g\g\5\y\u\7\3\z\4\9\z\c\b\m\k\6\4\v\a\i\a\e\9\r\s\d\g\f\u\w\y\4\d\y\c\c\9\6\m\z\x\2\h\l\7\z\z\2\q\j\9\m\a\0\e\u\e\8\r\8\c\9\8\g\e\c\l\4\k\t\9\2\x\v\q\o\p\8\2\6\n\g\t\j\d\9\r\0\q\g\p\q\g\6\k\a\r\p\y\z\s\s\o\b\u\a\x\a\5\y\k\n\p\g\f\s\l\v\y\c\z\a\b\u\f\m\a\7\n\2\0\u\i\9\i\j\y\3\3\1\a\t\k\h\l\w\l\k\i\a\p\9\u\1\p\m\h\v\6\o\o\p\a\q\2\9\0\v\c\p\r\m\s\6\q\w\t\y\f\t\x\m\h\3\h\6\g\e\g\c\5\z\y\c\0\x\6\r\q\e\8\g\x\2\y\a\6\e\l\3\d\e\a\f\f\1\6\i\w\t\2\g\d\b\9\0\3\a\b\a\g\m\l\v\2\s\f\b\v\6\i\9\i\x\y\d\u\y\5\h\5\y\9\x\2\l\l\k\9\n\t\h\5\v\m\z\w\d\9\x\w\n\x\6\s\o\g\u\x\2\5\l\e\t\d\6\g\g\d\u\8\y\b\z\w\g\f\e\y\q\2\b\1\k\r\r\q\4\z\r\k\t\3\v\v\1\0\w\s\a\m\i\o\h\w\1\n\u\z\5\3\2\6\1\4\p\i\2\8\e\f\u\e\4\z\t\n\1\w\h\8\c\e\n\i\k\x\9\z\j\u\s\l\f\n\l\2\y\7\t\z\g\c\z\r\5\6\c\4\p\k\i\t\s\9\9\r\5\a\h\9\5\7\h\v\m\k\2\n\p\7\9\e\v\j\r\8\o\y\a\j\y\2\h\i\u\8\m\e\4\t\y\l\y\a\z\d\c\8\6\e\0\t\r\l\j\e\g\w\r\p\8\3\p\8\0\2\q\d\h\y\s\0\h\s\1\m\o\g\k\l\8\e\d\o\p\j\2\d\e\v\e\j\1\i\8\y\c\7\n\m\6\f\m\u\3\v\g\u\f\a\2\0\s\l\v\h\6\u\t\k\p\p\0 ]] 00:32:25.519 00:46:19 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:32:25.519 00:46:19 -- dd/posix.sh@86 -- # gen_bytes 512 00:32:25.519 00:46:19 -- dd/common.sh@98 -- # xtrace_disable 00:32:25.519 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:32:25.519 00:46:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:32:25.519 00:46:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:32:25.777 [2024-04-24 00:46:19.348524] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:25.777 [2024-04-24 00:46:19.348925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144865 ] 00:32:25.777 [2024-04-24 00:46:19.507658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.035 [2024-04-24 00:46:19.751672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.976  Copying: 512/512 [B] (average 500 kBps) 00:32:27.976 00:32:27.976 00:46:21 -- dd/posix.sh@93 -- # [[ gnvj9eo3b6l4xuoq9t8dnckbvljruer69gtvovb8p3ybonlv372vycnqavxsimbapaxv5nkxnrcqzmkibrzlont7le6isach1qi0pc5xws3qlwvuugo5tgzmyqjiouynb1p7gs6xb2o7nffigvrk51q6u7h2h0w56zpcbh2t1jxj006nu54g60xd8d519kc2yht7h1w31c5whne7swmp0qgr6ki0gisp5w8qhu7afhzqkg82vsg5redhi6hmd7b7o85lgsaoay2uu203ep47fmcrsrqmy7nxv5epdzawnaogh8hj3cs6a1sw7eteepoygmzzv2nr04mvgndvgl8mvfmxg0zjt9vfz4btpfz19tzjbcemikerh3nyav1tducbes2511yqp08lszszst11fmujcdfk3tj54y632nb1f3t443qwoesixowjvzuz5t1y6r73b7kel4yruqem8sjmrapfqzomo2tu7v5x3g2xxlr1xe6hleqy6r247g96sfva == \g\n\v\j\9\e\o\3\b\6\l\4\x\u\o\q\9\t\8\d\n\c\k\b\v\l\j\r\u\e\r\6\9\g\t\v\o\v\b\8\p\3\y\b\o\n\l\v\3\7\2\v\y\c\n\q\a\v\x\s\i\m\b\a\p\a\x\v\5\n\k\x\n\r\c\q\z\m\k\i\b\r\z\l\o\n\t\7\l\e\6\i\s\a\c\h\1\q\i\0\p\c\5\x\w\s\3\q\l\w\v\u\u\g\o\5\t\g\z\m\y\q\j\i\o\u\y\n\b\1\p\7\g\s\6\x\b\2\o\7\n\f\f\i\g\v\r\k\5\1\q\6\u\7\h\2\h\0\w\5\6\z\p\c\b\h\2\t\1\j\x\j\0\0\6\n\u\5\4\g\6\0\x\d\8\d\5\1\9\k\c\2\y\h\t\7\h\1\w\3\1\c\5\w\h\n\e\7\s\w\m\p\0\q\g\r\6\k\i\0\g\i\s\p\5\w\8\q\h\u\7\a\f\h\z\q\k\g\8\2\v\s\g\5\r\e\d\h\i\6\h\m\d\7\b\7\o\8\5\l\g\s\a\o\a\y\2\u\u\2\0\3\e\p\4\7\f\m\c\r\s\r\q\m\y\7\n\x\v\5\e\p\d\z\a\w\n\a\o\g\h\8\h\j\3\c\s\6\a\1\s\w\7\e\t\e\e\p\o\y\g\m\z\z\v\2\n\r\0\4\m\v\g\n\d\v\g\l\8\m\v\f\m\x\g\0\z\j\t\9\v\f\z\4\b\t\p\f\z\1\9\t\z\j\b\c\e\m\i\k\e\r\h\3\n\y\a\v\1\t\d\u\c\b\e\s\2\5\1\1\y\q\p\0\8\l\s\z\s\z\s\t\1\1\f\m\u\j\c\d\f\k\3\t\j\5\4\y\6\3\2\n\b\1\f\3\t\4\4\3\q\w\o\e\s\i\x\o\w\j\v\z\u\z\5\t\1\y\6\r\7\3\b\7\k\e\l\4\y\r\u\q\e\m\8\s\j\m\r\a\p\f\q\z\o\m\o\2\t\u\7\v\5\x\3\g\2\x\x\l\r\1\x\e\6\h\l\e\q\y\6\r\2\4\7\g\9\6\s\f\v\a ]] 00:32:27.976 00:46:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:32:27.976 00:46:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:32:27.976 [2024-04-24 00:46:21.677929] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:27.976 [2024-04-24 00:46:21.678602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144896 ] 00:32:28.234 [2024-04-24 00:46:21.861796] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.491 [2024-04-24 00:46:22.085284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.122  Copying: 512/512 [B] (average 500 kBps) 00:32:30.122 00:32:30.122 00:46:23 -- dd/posix.sh@93 -- # [[ gnvj9eo3b6l4xuoq9t8dnckbvljruer69gtvovb8p3ybonlv372vycnqavxsimbapaxv5nkxnrcqzmkibrzlont7le6isach1qi0pc5xws3qlwvuugo5tgzmyqjiouynb1p7gs6xb2o7nffigvrk51q6u7h2h0w56zpcbh2t1jxj006nu54g60xd8d519kc2yht7h1w31c5whne7swmp0qgr6ki0gisp5w8qhu7afhzqkg82vsg5redhi6hmd7b7o85lgsaoay2uu203ep47fmcrsrqmy7nxv5epdzawnaogh8hj3cs6a1sw7eteepoygmzzv2nr04mvgndvgl8mvfmxg0zjt9vfz4btpfz19tzjbcemikerh3nyav1tducbes2511yqp08lszszst11fmujcdfk3tj54y632nb1f3t443qwoesixowjvzuz5t1y6r73b7kel4yruqem8sjmrapfqzomo2tu7v5x3g2xxlr1xe6hleqy6r247g96sfva == \g\n\v\j\9\e\o\3\b\6\l\4\x\u\o\q\9\t\8\d\n\c\k\b\v\l\j\r\u\e\r\6\9\g\t\v\o\v\b\8\p\3\y\b\o\n\l\v\3\7\2\v\y\c\n\q\a\v\x\s\i\m\b\a\p\a\x\v\5\n\k\x\n\r\c\q\z\m\k\i\b\r\z\l\o\n\t\7\l\e\6\i\s\a\c\h\1\q\i\0\p\c\5\x\w\s\3\q\l\w\v\u\u\g\o\5\t\g\z\m\y\q\j\i\o\u\y\n\b\1\p\7\g\s\6\x\b\2\o\7\n\f\f\i\g\v\r\k\5\1\q\6\u\7\h\2\h\0\w\5\6\z\p\c\b\h\2\t\1\j\x\j\0\0\6\n\u\5\4\g\6\0\x\d\8\d\5\1\9\k\c\2\y\h\t\7\h\1\w\3\1\c\5\w\h\n\e\7\s\w\m\p\0\q\g\r\6\k\i\0\g\i\s\p\5\w\8\q\h\u\7\a\f\h\z\q\k\g\8\2\v\s\g\5\r\e\d\h\i\6\h\m\d\7\b\7\o\8\5\l\g\s\a\o\a\y\2\u\u\2\0\3\e\p\4\7\f\m\c\r\s\r\q\m\y\7\n\x\v\5\e\p\d\z\a\w\n\a\o\g\h\8\h\j\3\c\s\6\a\1\s\w\7\e\t\e\e\p\o\y\g\m\z\z\v\2\n\r\0\4\m\v\g\n\d\v\g\l\8\m\v\f\m\x\g\0\z\j\t\9\v\f\z\4\b\t\p\f\z\1\9\t\z\j\b\c\e\m\i\k\e\r\h\3\n\y\a\v\1\t\d\u\c\b\e\s\2\5\1\1\y\q\p\0\8\l\s\z\s\z\s\t\1\1\f\m\u\j\c\d\f\k\3\t\j\5\4\y\6\3\2\n\b\1\f\3\t\4\4\3\q\w\o\e\s\i\x\o\w\j\v\z\u\z\5\t\1\y\6\r\7\3\b\7\k\e\l\4\y\r\u\q\e\m\8\s\j\m\r\a\p\f\q\z\o\m\o\2\t\u\7\v\5\x\3\g\2\x\x\l\r\1\x\e\6\h\l\e\q\y\6\r\2\4\7\g\9\6\s\f\v\a ]] 00:32:30.122 00:46:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:32:30.122 00:46:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:32:30.381 [2024-04-24 00:46:23.948559] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:30.381 [2024-04-24 00:46:23.948980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144926 ] 00:32:30.381 [2024-04-24 00:46:24.130912] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.640 [2024-04-24 00:46:24.427133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.585  Copying: 512/512 [B] (average 250 kBps) 00:32:32.585 00:32:32.585 00:46:26 -- dd/posix.sh@93 -- # [[ gnvj9eo3b6l4xuoq9t8dnckbvljruer69gtvovb8p3ybonlv372vycnqavxsimbapaxv5nkxnrcqzmkibrzlont7le6isach1qi0pc5xws3qlwvuugo5tgzmyqjiouynb1p7gs6xb2o7nffigvrk51q6u7h2h0w56zpcbh2t1jxj006nu54g60xd8d519kc2yht7h1w31c5whne7swmp0qgr6ki0gisp5w8qhu7afhzqkg82vsg5redhi6hmd7b7o85lgsaoay2uu203ep47fmcrsrqmy7nxv5epdzawnaogh8hj3cs6a1sw7eteepoygmzzv2nr04mvgndvgl8mvfmxg0zjt9vfz4btpfz19tzjbcemikerh3nyav1tducbes2511yqp08lszszst11fmujcdfk3tj54y632nb1f3t443qwoesixowjvzuz5t1y6r73b7kel4yruqem8sjmrapfqzomo2tu7v5x3g2xxlr1xe6hleqy6r247g96sfva == \g\n\v\j\9\e\o\3\b\6\l\4\x\u\o\q\9\t\8\d\n\c\k\b\v\l\j\r\u\e\r\6\9\g\t\v\o\v\b\8\p\3\y\b\o\n\l\v\3\7\2\v\y\c\n\q\a\v\x\s\i\m\b\a\p\a\x\v\5\n\k\x\n\r\c\q\z\m\k\i\b\r\z\l\o\n\t\7\l\e\6\i\s\a\c\h\1\q\i\0\p\c\5\x\w\s\3\q\l\w\v\u\u\g\o\5\t\g\z\m\y\q\j\i\o\u\y\n\b\1\p\7\g\s\6\x\b\2\o\7\n\f\f\i\g\v\r\k\5\1\q\6\u\7\h\2\h\0\w\5\6\z\p\c\b\h\2\t\1\j\x\j\0\0\6\n\u\5\4\g\6\0\x\d\8\d\5\1\9\k\c\2\y\h\t\7\h\1\w\3\1\c\5\w\h\n\e\7\s\w\m\p\0\q\g\r\6\k\i\0\g\i\s\p\5\w\8\q\h\u\7\a\f\h\z\q\k\g\8\2\v\s\g\5\r\e\d\h\i\6\h\m\d\7\b\7\o\8\5\l\g\s\a\o\a\y\2\u\u\2\0\3\e\p\4\7\f\m\c\r\s\r\q\m\y\7\n\x\v\5\e\p\d\z\a\w\n\a\o\g\h\8\h\j\3\c\s\6\a\1\s\w\7\e\t\e\e\p\o\y\g\m\z\z\v\2\n\r\0\4\m\v\g\n\d\v\g\l\8\m\v\f\m\x\g\0\z\j\t\9\v\f\z\4\b\t\p\f\z\1\9\t\z\j\b\c\e\m\i\k\e\r\h\3\n\y\a\v\1\t\d\u\c\b\e\s\2\5\1\1\y\q\p\0\8\l\s\z\s\z\s\t\1\1\f\m\u\j\c\d\f\k\3\t\j\5\4\y\6\3\2\n\b\1\f\3\t\4\4\3\q\w\o\e\s\i\x\o\w\j\v\z\u\z\5\t\1\y\6\r\7\3\b\7\k\e\l\4\y\r\u\q\e\m\8\s\j\m\r\a\p\f\q\z\o\m\o\2\t\u\7\v\5\x\3\g\2\x\x\l\r\1\x\e\6\h\l\e\q\y\6\r\2\4\7\g\9\6\s\f\v\a ]] 00:32:32.585 00:46:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:32:32.585 00:46:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:32:32.843 [2024-04-24 00:46:26.421704] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:32.843 [2024-04-24 00:46:26.422097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144955 ] 00:32:32.843 [2024-04-24 00:46:26.589681] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.408 [2024-04-24 00:46:26.896391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.082  Copying: 512/512 [B] (average 250 kBps) 00:32:35.082 00:32:35.082 ************************************ 00:32:35.082 END TEST dd_flags_misc_forced_aio 00:32:35.082 ************************************ 00:32:35.082 00:46:28 -- dd/posix.sh@93 -- # [[ gnvj9eo3b6l4xuoq9t8dnckbvljruer69gtvovb8p3ybonlv372vycnqavxsimbapaxv5nkxnrcqzmkibrzlont7le6isach1qi0pc5xws3qlwvuugo5tgzmyqjiouynb1p7gs6xb2o7nffigvrk51q6u7h2h0w56zpcbh2t1jxj006nu54g60xd8d519kc2yht7h1w31c5whne7swmp0qgr6ki0gisp5w8qhu7afhzqkg82vsg5redhi6hmd7b7o85lgsaoay2uu203ep47fmcrsrqmy7nxv5epdzawnaogh8hj3cs6a1sw7eteepoygmzzv2nr04mvgndvgl8mvfmxg0zjt9vfz4btpfz19tzjbcemikerh3nyav1tducbes2511yqp08lszszst11fmujcdfk3tj54y632nb1f3t443qwoesixowjvzuz5t1y6r73b7kel4yruqem8sjmrapfqzomo2tu7v5x3g2xxlr1xe6hleqy6r247g96sfva == \g\n\v\j\9\e\o\3\b\6\l\4\x\u\o\q\9\t\8\d\n\c\k\b\v\l\j\r\u\e\r\6\9\g\t\v\o\v\b\8\p\3\y\b\o\n\l\v\3\7\2\v\y\c\n\q\a\v\x\s\i\m\b\a\p\a\x\v\5\n\k\x\n\r\c\q\z\m\k\i\b\r\z\l\o\n\t\7\l\e\6\i\s\a\c\h\1\q\i\0\p\c\5\x\w\s\3\q\l\w\v\u\u\g\o\5\t\g\z\m\y\q\j\i\o\u\y\n\b\1\p\7\g\s\6\x\b\2\o\7\n\f\f\i\g\v\r\k\5\1\q\6\u\7\h\2\h\0\w\5\6\z\p\c\b\h\2\t\1\j\x\j\0\0\6\n\u\5\4\g\6\0\x\d\8\d\5\1\9\k\c\2\y\h\t\7\h\1\w\3\1\c\5\w\h\n\e\7\s\w\m\p\0\q\g\r\6\k\i\0\g\i\s\p\5\w\8\q\h\u\7\a\f\h\z\q\k\g\8\2\v\s\g\5\r\e\d\h\i\6\h\m\d\7\b\7\o\8\5\l\g\s\a\o\a\y\2\u\u\2\0\3\e\p\4\7\f\m\c\r\s\r\q\m\y\7\n\x\v\5\e\p\d\z\a\w\n\a\o\g\h\8\h\j\3\c\s\6\a\1\s\w\7\e\t\e\e\p\o\y\g\m\z\z\v\2\n\r\0\4\m\v\g\n\d\v\g\l\8\m\v\f\m\x\g\0\z\j\t\9\v\f\z\4\b\t\p\f\z\1\9\t\z\j\b\c\e\m\i\k\e\r\h\3\n\y\a\v\1\t\d\u\c\b\e\s\2\5\1\1\y\q\p\0\8\l\s\z\s\z\s\t\1\1\f\m\u\j\c\d\f\k\3\t\j\5\4\y\6\3\2\n\b\1\f\3\t\4\4\3\q\w\o\e\s\i\x\o\w\j\v\z\u\z\5\t\1\y\6\r\7\3\b\7\k\e\l\4\y\r\u\q\e\m\8\s\j\m\r\a\p\f\q\z\o\m\o\2\t\u\7\v\5\x\3\g\2\x\x\l\r\1\x\e\6\h\l\e\q\y\6\r\2\4\7\g\9\6\s\f\v\a ]] 00:32:35.082 00:32:35.082 real 0m18.735s 00:32:35.082 user 0m15.636s 00:32:35.082 sys 0m2.036s 00:32:35.082 00:46:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:35.082 00:46:28 -- common/autotest_common.sh@10 -- # set +x 00:32:35.082 00:46:28 -- dd/posix.sh@1 -- # cleanup 00:32:35.082 00:46:28 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:32:35.082 00:46:28 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:32:35.082 ************************************ 00:32:35.082 END TEST spdk_dd_posix 00:32:35.082 ************************************ 00:32:35.082 00:32:35.082 real 1m16.754s 00:32:35.082 user 1m2.287s 00:32:35.082 sys 0m8.478s 00:32:35.082 00:46:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:35.082 00:46:28 -- common/autotest_common.sh@10 -- # set +x 00:32:35.082 00:46:28 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:32:35.082 00:46:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:35.082 00:46:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:35.082 00:46:28 -- 
common/autotest_common.sh@10 -- # set +x 00:32:35.082 ************************************ 00:32:35.082 START TEST spdk_dd_malloc 00:32:35.082 ************************************ 00:32:35.082 00:46:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:32:35.340 * Looking for test storage... 00:32:35.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:32:35.340 00:46:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:35.340 00:46:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:35.340 00:46:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:35.340 00:46:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:35.340 00:46:28 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:35.340 00:46:28 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:35.340 00:46:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:35.340 00:46:28 -- paths/export.sh@5 -- # export PATH 00:32:35.340 00:46:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:35.340 00:46:28 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:32:35.340 00:46:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:35.340 00:46:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:35.340 00:46:28 -- common/autotest_common.sh@10 -- # set +x 00:32:35.340 ************************************ 00:32:35.340 START TEST dd_malloc_copy 00:32:35.340 ************************************ 00:32:35.340 00:46:29 -- 
common/autotest_common.sh@1111 -- # malloc_copy 00:32:35.340 00:46:29 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:32:35.340 00:46:29 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:32:35.340 00:46:29 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:32:35.340 00:46:29 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:32:35.340 00:46:29 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:32:35.340 00:46:29 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:32:35.340 00:46:29 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:32:35.340 00:46:29 -- dd/malloc.sh@28 -- # gen_conf 00:32:35.340 00:46:29 -- dd/common.sh@31 -- # xtrace_disable 00:32:35.340 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:32:35.340 { 00:32:35.340 "subsystems": [ 00:32:35.340 { 00:32:35.340 "subsystem": "bdev", 00:32:35.340 "config": [ 00:32:35.340 { 00:32:35.340 "params": { 00:32:35.340 "block_size": 512, 00:32:35.340 "num_blocks": 1048576, 00:32:35.340 "name": "malloc0" 00:32:35.340 }, 00:32:35.340 "method": "bdev_malloc_create" 00:32:35.340 }, 00:32:35.340 { 00:32:35.340 "params": { 00:32:35.340 "block_size": 512, 00:32:35.340 "num_blocks": 1048576, 00:32:35.340 "name": "malloc1" 00:32:35.340 }, 00:32:35.340 "method": "bdev_malloc_create" 00:32:35.340 }, 00:32:35.340 { 00:32:35.340 "method": "bdev_wait_for_examine" 00:32:35.340 } 00:32:35.340 ] 00:32:35.340 } 00:32:35.340 ] 00:32:35.340 } 00:32:35.340 [2024-04-24 00:46:29.073561] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:35.340 [2024-04-24 00:46:29.073940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145069 ] 00:32:35.598 [2024-04-24 00:46:29.247901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.855 [2024-04-24 00:46:29.594503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.584  Copying: 190/512 [MB] (190 MBps) Copying: 379/512 [MB] (189 MBps) Copying: 512/512 [MB] (average 190 MBps) 00:32:45.584 00:32:45.584 00:46:38 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:32:45.584 00:46:38 -- dd/malloc.sh@33 -- # gen_conf 00:32:45.584 00:46:38 -- dd/common.sh@31 -- # xtrace_disable 00:32:45.584 00:46:38 -- common/autotest_common.sh@10 -- # set +x 00:32:45.584 { 00:32:45.584 "subsystems": [ 00:32:45.584 { 00:32:45.584 "subsystem": "bdev", 00:32:45.584 "config": [ 00:32:45.584 { 00:32:45.584 "params": { 00:32:45.584 "block_size": 512, 00:32:45.584 "num_blocks": 1048576, 00:32:45.584 "name": "malloc0" 00:32:45.584 }, 00:32:45.584 "method": "bdev_malloc_create" 00:32:45.584 }, 00:32:45.584 { 00:32:45.584 "params": { 00:32:45.584 "block_size": 512, 00:32:45.584 "num_blocks": 1048576, 00:32:45.584 "name": "malloc1" 00:32:45.584 }, 00:32:45.584 "method": "bdev_malloc_create" 00:32:45.584 }, 00:32:45.584 { 00:32:45.584 "method": "bdev_wait_for_examine" 00:32:45.584 } 00:32:45.584 ] 00:32:45.584 } 00:32:45.584 ] 00:32:45.584 } 00:32:45.584 [2024-04-24 00:46:38.591249] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:32:45.584 [2024-04-24 00:46:38.592225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145183 ] 00:32:45.584 [2024-04-24 00:46:38.764371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.585 [2024-04-24 00:46:38.993808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.875  Copying: 193/512 [MB] (193 MBps) Copying: 380/512 [MB] (187 MBps) Copying: 512/512 [MB] (average 189 MBps) 00:32:54.875 00:32:54.875 00:32:54.875 real 0m18.862s 00:32:54.876 user 0m17.633s 00:32:54.876 sys 0m1.060s 00:32:54.876 00:46:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:54.876 ************************************ 00:32:54.876 END TEST dd_malloc_copy 00:32:54.876 ************************************ 00:32:54.876 00:46:47 -- common/autotest_common.sh@10 -- # set +x 00:32:54.876 ************************************ 00:32:54.876 END TEST spdk_dd_malloc 00:32:54.876 ************************************ 00:32:54.876 00:32:54.876 real 0m19.050s 00:32:54.876 user 0m17.731s 00:32:54.876 sys 0m1.155s 00:32:54.876 00:46:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:54.876 00:46:47 -- common/autotest_common.sh@10 -- # set +x 00:32:54.876 00:46:47 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:32:54.876 00:46:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:54.876 00:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:54.876 00:46:47 -- common/autotest_common.sh@10 -- # set +x 00:32:54.876 ************************************ 00:32:54.876 
START TEST spdk_dd_bdev_to_bdev 00:32:54.876 ************************************ 00:32:54.876 00:46:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:32:54.876 * Looking for test storage... 00:32:54.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:32:54.876 00:46:48 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:54.876 00:46:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.876 00:46:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.876 00:46:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.876 00:46:48 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:54.876 00:46:48 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:54.876 00:46:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:54.876 00:46:48 -- paths/export.sh@5 -- # export PATH 00:32:54.876 00:46:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@68 -- # 
aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:32:54.876 00:46:48 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:32:54.876 [2024-04-24 00:46:48.195018] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:32:54.876 [2024-04-24 00:46:48.195469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145367 ] 00:32:54.876 [2024-04-24 00:46:48.375351] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.876 [2024-04-24 00:46:48.662317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.791  Copying: 256/256 [MB] (average 1098 MBps) 00:32:56.791 00:32:57.048 00:46:50 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:32:57.048 00:46:50 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:32:57.048 00:46:50 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:32:57.048 00:46:50 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:32:57.048 00:46:50 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:32:57.048 00:46:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:32:57.048 00:46:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:57.048 00:46:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.048 ************************************ 00:32:57.048 START TEST dd_inflate_file 00:32:57.048 ************************************ 00:32:57.048 00:46:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:32:57.048 [2024-04-24 00:46:50.761042] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:57.048 [2024-04-24 00:46:50.761553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145404 ] 00:32:57.305 [2024-04-24 00:46:50.939798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.562 [2024-04-24 00:46:51.163345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.216  Copying: 64/64 [MB] (average 941 MBps) 00:32:59.216 00:32:59.216 ************************************ 00:32:59.216 END TEST dd_inflate_file 00:32:59.216 ************************************ 00:32:59.216 00:32:59.216 real 0m2.318s 00:32:59.216 user 0m1.883s 00:32:59.216 sys 0m0.308s 00:32:59.216 00:46:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:59.216 00:46:52 -- common/autotest_common.sh@10 -- # set +x 00:32:59.473 00:46:53 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:32:59.473 00:46:53 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:32:59.473 00:46:53 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:32:59.473 00:46:53 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:32:59.473 00:46:53 -- dd/common.sh@31 -- # xtrace_disable 00:32:59.473 00:46:53 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:32:59.473 00:46:53 -- common/autotest_common.sh@10 -- # set +x 00:32:59.473 00:46:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:59.473 00:46:53 -- common/autotest_common.sh@10 -- # set +x 00:32:59.473 ************************************ 00:32:59.473 START TEST dd_copy_to_out_bdev 00:32:59.473 ************************************ 00:32:59.473 00:46:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:32:59.473 { 00:32:59.473 "subsystems": [ 00:32:59.473 { 00:32:59.473 "subsystem": "bdev", 00:32:59.473 "config": [ 00:32:59.473 { 00:32:59.473 "params": { 00:32:59.473 "block_size": 4096, 00:32:59.473 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:32:59.473 "name": "aio1" 00:32:59.473 }, 00:32:59.473 "method": "bdev_aio_create" 00:32:59.473 }, 00:32:59.473 { 00:32:59.473 "params": { 00:32:59.473 "trtype": "pcie", 00:32:59.473 "traddr": "0000:00:10.0", 00:32:59.473 "name": "Nvme0" 00:32:59.473 }, 00:32:59.473 "method": "bdev_nvme_attach_controller" 00:32:59.473 }, 00:32:59.473 { 00:32:59.473 "method": "bdev_wait_for_examine" 00:32:59.473 } 00:32:59.473 ] 00:32:59.473 } 00:32:59.473 ] 00:32:59.473 } 00:32:59.473 [2024-04-24 00:46:53.142291] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:32:59.473 [2024-04-24 00:46:53.142965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145467 ] 00:32:59.731 [2024-04-24 00:46:53.333631] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.001 [2024-04-24 00:46:53.618865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.287  Copying: 64/64 [MB] (average 65 MBps) 00:33:03.287 00:33:03.287 00:33:03.287 real 0m3.608s 00:33:03.287 user 0m3.214s 00:33:03.287 sys 0m0.288s 00:33:03.287 00:46:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:03.287 00:46:56 -- common/autotest_common.sh@10 -- # set +x 00:33:03.287 ************************************ 00:33:03.287 END TEST dd_copy_to_out_bdev 00:33:03.287 ************************************ 00:33:03.287 00:46:56 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:33:03.287 00:46:56 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:33:03.287 00:46:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:03.287 00:46:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:03.287 00:46:56 -- common/autotest_common.sh@10 -- # set +x 00:33:03.287 ************************************ 00:33:03.287 START TEST dd_offset_magic 00:33:03.287 ************************************ 00:33:03.287 00:46:56 -- common/autotest_common.sh@1111 -- # offset_magic 00:33:03.287 00:46:56 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:33:03.287 00:46:56 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:33:03.287 00:46:56 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:33:03.287 00:46:56 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:33:03.287 00:46:56 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:33:03.287 00:46:56 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:33:03.287 00:46:56 -- dd/common.sh@31 -- # xtrace_disable 00:33:03.287 00:46:56 -- common/autotest_common.sh@10 -- # set +x 00:33:03.287 { 00:33:03.287 "subsystems": [ 00:33:03.287 { 00:33:03.287 "subsystem": "bdev", 00:33:03.287 "config": [ 00:33:03.287 { 00:33:03.287 "params": { 00:33:03.287 "block_size": 4096, 00:33:03.287 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:03.287 "name": "aio1" 00:33:03.287 }, 00:33:03.287 "method": "bdev_aio_create" 00:33:03.287 }, 00:33:03.287 { 00:33:03.287 "params": { 00:33:03.287 "trtype": "pcie", 00:33:03.287 "traddr": "0000:00:10.0", 00:33:03.287 "name": "Nvme0" 00:33:03.287 }, 00:33:03.287 "method": "bdev_nvme_attach_controller" 00:33:03.287 }, 00:33:03.287 { 00:33:03.287 "method": "bdev_wait_for_examine" 00:33:03.287 } 00:33:03.287 ] 00:33:03.287 } 00:33:03.287 ] 00:33:03.287 } 00:33:03.287 [2024-04-24 00:46:56.843854] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:33:03.287 [2024-04-24 00:46:56.844070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145542 ] 00:33:03.288 [2024-04-24 00:46:57.025123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.545 [2024-04-24 00:46:57.259828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.851  Copying: 65/65 [MB] (average 213 MBps) 00:33:05.851 00:33:05.851 00:46:59 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:33:05.851 00:46:59 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:33:05.851 00:46:59 -- dd/common.sh@31 -- # xtrace_disable 00:33:05.851 00:46:59 -- common/autotest_common.sh@10 -- # set +x 00:33:05.851 { 00:33:05.851 "subsystems": [ 00:33:05.851 { 00:33:05.851 "subsystem": "bdev", 00:33:05.851 "config": [ 00:33:05.851 { 00:33:05.851 "params": { 00:33:05.851 "block_size": 4096, 00:33:05.851 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:05.851 "name": "aio1" 00:33:05.851 }, 00:33:05.851 "method": "bdev_aio_create" 00:33:05.851 }, 00:33:05.851 { 00:33:05.851 "params": { 00:33:05.851 "trtype": "pcie", 00:33:05.851 "traddr": "0000:00:10.0", 00:33:05.851 "name": "Nvme0" 00:33:05.851 }, 00:33:05.851 "method": "bdev_nvme_attach_controller" 00:33:05.851 }, 00:33:05.852 { 00:33:05.852 "method": "bdev_wait_for_examine" 00:33:05.852 } 00:33:05.852 ] 00:33:05.852 } 00:33:05.852 ] 00:33:05.852 } 00:33:06.109 [2024-04-24 00:46:59.651396] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:33:06.109 [2024-04-24 00:46:59.651630] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145587 ] 00:33:06.109 [2024-04-24 00:46:59.846358] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.366 [2024-04-24 00:47:00.144242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.357  Copying: 1024/1024 [kB] (average 500 MBps) 00:33:08.357 00:33:08.357 00:47:02 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:33:08.357 00:47:02 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:33:08.357 00:47:02 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:33:08.357 00:47:02 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:33:08.357 00:47:02 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:33:08.357 00:47:02 -- dd/common.sh@31 -- # xtrace_disable 00:33:08.357 00:47:02 -- common/autotest_common.sh@10 -- # set +x 00:33:08.616 { 00:33:08.616 "subsystems": [ 00:33:08.616 { 00:33:08.616 "subsystem": "bdev", 00:33:08.616 "config": [ 00:33:08.616 { 00:33:08.616 "params": { 00:33:08.616 "block_size": 4096, 00:33:08.616 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:08.616 "name": "aio1" 00:33:08.616 }, 00:33:08.616 "method": "bdev_aio_create" 00:33:08.616 }, 00:33:08.616 { 00:33:08.616 "params": { 00:33:08.616 "trtype": "pcie", 00:33:08.616 "traddr": "0000:00:10.0", 00:33:08.616 "name": "Nvme0" 00:33:08.616 }, 00:33:08.616 "method": "bdev_nvme_attach_controller" 00:33:08.616 }, 00:33:08.616 { 00:33:08.616 "method": "bdev_wait_for_examine" 00:33:08.616 } 00:33:08.616 ] 00:33:08.616 } 00:33:08.616 ] 00:33:08.616 } 00:33:08.616 [2024-04-24 00:47:02.177962] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:33:08.616 [2024-04-24 00:47:02.178123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145621 ] 00:33:08.616 [2024-04-24 00:47:02.343148] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.875 [2024-04-24 00:47:02.577609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.179  Copying: 65/65 [MB] (average 256 MBps) 00:33:11.179 00:33:11.179 00:47:04 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:33:11.179 00:47:04 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:33:11.179 00:47:04 -- dd/common.sh@31 -- # xtrace_disable 00:33:11.179 00:47:04 -- common/autotest_common.sh@10 -- # set +x 00:33:11.179 { 00:33:11.179 "subsystems": [ 00:33:11.179 { 00:33:11.179 "subsystem": "bdev", 00:33:11.179 "config": [ 00:33:11.179 { 00:33:11.179 "params": { 00:33:11.179 "block_size": 4096, 00:33:11.179 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:11.179 "name": "aio1" 00:33:11.179 }, 00:33:11.179 "method": "bdev_aio_create" 00:33:11.179 }, 00:33:11.179 { 00:33:11.179 "params": { 00:33:11.179 "trtype": "pcie", 00:33:11.179 "traddr": "0000:00:10.0", 00:33:11.179 "name": "Nvme0" 00:33:11.179 }, 00:33:11.179 "method": "bdev_nvme_attach_controller" 00:33:11.179 }, 00:33:11.179 { 00:33:11.179 "method": "bdev_wait_for_examine" 00:33:11.179 } 00:33:11.179 ] 00:33:11.179 } 00:33:11.179 ] 00:33:11.179 } 00:33:11.179 [2024-04-24 00:47:04.885807] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:33:11.179 [2024-04-24 00:47:04.886136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145662 ] 00:33:11.438 [2024-04-24 00:47:05.073114] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.697 [2024-04-24 00:47:05.299070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.685  Copying: 1024/1024 [kB] (average 500 MBps) 00:33:13.685 00:33:13.685 00:47:07 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:33:13.685 00:47:07 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:33:13.685 00:33:13.685 real 0m10.443s 00:33:13.685 user 0m8.309s 00:33:13.685 sys 0m1.257s 00:33:13.685 00:47:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:13.685 00:47:07 -- common/autotest_common.sh@10 -- # set +x 00:33:13.685 ************************************ 00:33:13.685 END TEST dd_offset_magic 00:33:13.685 ************************************ 00:33:13.685 00:47:07 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:33:13.685 00:47:07 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:33:13.685 00:47:07 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:33:13.685 00:47:07 -- dd/common.sh@11 -- # local nvme_ref= 00:33:13.685 00:47:07 -- dd/common.sh@12 -- # local size=4194330 00:33:13.685 00:47:07 -- dd/common.sh@14 -- # local bs=1048576 00:33:13.685 00:47:07 -- dd/common.sh@15 -- # local count=5 00:33:13.685 00:47:07 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:33:13.685 00:47:07 -- dd/common.sh@18 -- # gen_conf 00:33:13.685 00:47:07 -- dd/common.sh@31 -- # xtrace_disable 00:33:13.685 00:47:07 -- common/autotest_common.sh@10 -- # set +x 00:33:13.685 { 00:33:13.685 "subsystems": [ 00:33:13.685 { 00:33:13.685 "subsystem": "bdev", 00:33:13.685 "config": [ 00:33:13.685 { 00:33:13.685 "params": { 00:33:13.685 "block_size": 4096, 00:33:13.685 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:13.685 "name": "aio1" 00:33:13.685 }, 00:33:13.685 "method": "bdev_aio_create" 00:33:13.685 }, 00:33:13.685 { 00:33:13.685 "params": { 00:33:13.685 "trtype": "pcie", 00:33:13.685 "traddr": "0000:00:10.0", 00:33:13.685 "name": "Nvme0" 00:33:13.685 }, 00:33:13.685 "method": "bdev_nvme_attach_controller" 00:33:13.685 }, 00:33:13.685 { 00:33:13.685 "method": "bdev_wait_for_examine" 00:33:13.685 } 00:33:13.685 ] 00:33:13.685 } 00:33:13.685 ] 00:33:13.685 } 00:33:13.685 [2024-04-24 00:47:07.327296] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:33:13.685 [2024-04-24 00:47:07.327505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145707 ] 00:33:13.943 [2024-04-24 00:47:07.502594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.201 [2024-04-24 00:47:07.765291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.141  Copying: 5120/5120 [kB] (average 1250 MBps) 00:33:16.141 00:33:16.141 00:47:09 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:33:16.141 00:47:09 -- dd/common.sh@10 -- # local bdev=aio1 00:33:16.141 00:47:09 -- dd/common.sh@11 -- # local nvme_ref= 00:33:16.141 00:47:09 -- dd/common.sh@12 -- # local size=4194330 00:33:16.141 00:47:09 -- dd/common.sh@14 -- # local bs=1048576 00:33:16.141 00:47:09 -- dd/common.sh@15 -- # local count=5 00:33:16.141 00:47:09 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:33:16.141 00:47:09 -- dd/common.sh@18 -- # gen_conf 00:33:16.141 00:47:09 -- dd/common.sh@31 -- # xtrace_disable 00:33:16.141 00:47:09 -- common/autotest_common.sh@10 -- # set +x 00:33:16.141 { 00:33:16.141 "subsystems": [ 00:33:16.141 { 00:33:16.141 "subsystem": "bdev", 00:33:16.141 "config": [ 00:33:16.141 { 00:33:16.141 "params": { 00:33:16.141 "block_size": 4096, 00:33:16.141 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:33:16.141 "name": "aio1" 00:33:16.141 }, 00:33:16.141 "method": "bdev_aio_create" 00:33:16.141 }, 00:33:16.141 { 00:33:16.141 "params": { 00:33:16.141 "trtype": "pcie", 00:33:16.141 "traddr": "0000:00:10.0", 00:33:16.141 "name": "Nvme0" 00:33:16.141 }, 00:33:16.141 "method": "bdev_nvme_attach_controller" 00:33:16.141 }, 00:33:16.141 { 00:33:16.141 "method": "bdev_wait_for_examine" 00:33:16.141 } 00:33:16.141 ] 00:33:16.141 } 00:33:16.141 ] 00:33:16.141 } 00:33:16.141 [2024-04-24 00:47:09.755806] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:33:16.141 [2024-04-24 00:47:09.756080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145750 ] 00:33:16.141 [2024-04-24 00:47:09.928618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.399 [2024-04-24 00:47:10.190818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.356  Copying: 5120/5120 [kB] (average 294 MBps) 00:33:18.356 00:33:18.614 00:47:12 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:33:18.614 ************************************ 00:33:18.614 END TEST spdk_dd_bdev_to_bdev 00:33:18.614 ************************************ 00:33:18.614 00:33:18.614 real 0m24.209s 00:33:18.614 user 0m19.586s 00:33:18.614 sys 0m3.148s 00:33:18.614 00:47:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:18.614 00:47:12 -- common/autotest_common.sh@10 -- # set +x 00:33:18.614 00:47:12 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:33:18.614 00:47:12 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:33:18.614 00:47:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:18.614 00:47:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:18.614 00:47:12 -- common/autotest_common.sh@10 -- # set +x 00:33:18.614 ************************************ 00:33:18.614 START TEST spdk_dd_sparse 00:33:18.614 ************************************ 00:33:18.614 00:47:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:33:18.614 * Looking for test storage... 
00:33:18.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:33:18.614 00:47:12 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:18.614 00:47:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.614 00:47:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.614 00:47:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.614 00:47:12 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:18.614 00:47:12 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:18.614 00:47:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:18.614 00:47:12 -- paths/export.sh@5 -- # export PATH 00:33:18.615 00:47:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:18.615 00:47:12 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:33:18.615 00:47:12 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:33:18.615 00:47:12 -- dd/sparse.sh@110 -- # file1=file_zero1 00:33:18.615 00:47:12 -- dd/sparse.sh@111 -- # file2=file_zero2 00:33:18.615 00:47:12 -- dd/sparse.sh@112 -- # file3=file_zero3 00:33:18.615 00:47:12 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:33:18.615 00:47:12 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:33:18.615 00:47:12 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:33:18.615 00:47:12 -- dd/sparse.sh@118 -- # prepare 00:33:18.615 00:47:12 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:33:18.615 00:47:12 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:33:18.873 1+0 records in 00:33:18.873 1+0 records 
out 00:33:18.873 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00921652 s, 455 MB/s 00:33:18.873 00:47:12 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:33:18.873 1+0 records in 00:33:18.873 1+0 records out 00:33:18.873 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00845759 s, 496 MB/s 00:33:18.873 00:47:12 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:33:18.873 1+0 records in 00:33:18.873 1+0 records out 00:33:18.873 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0101407 s, 414 MB/s 00:33:18.873 00:47:12 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:33:18.873 00:47:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:18.873 00:47:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:18.873 00:47:12 -- common/autotest_common.sh@10 -- # set +x 00:33:18.873 ************************************ 00:33:18.873 START TEST dd_sparse_file_to_file 00:33:18.873 ************************************ 00:33:18.873 00:47:12 -- common/autotest_common.sh@1111 -- # file_to_file 00:33:18.873 00:47:12 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:33:18.873 00:47:12 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:33:18.873 00:47:12 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:33:18.873 00:47:12 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:33:18.873 00:47:12 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:33:18.873 00:47:12 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:33:18.873 00:47:12 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:33:18.873 00:47:12 -- dd/sparse.sh@41 -- # gen_conf 00:33:18.873 00:47:12 -- dd/common.sh@31 -- # xtrace_disable 00:33:18.873 00:47:12 -- common/autotest_common.sh@10 -- # set +x 00:33:18.873 { 00:33:18.873 "subsystems": [ 00:33:18.873 { 00:33:18.873 "subsystem": "bdev", 00:33:18.873 "config": [ 00:33:18.873 { 00:33:18.873 "params": { 00:33:18.873 "block_size": 4096, 00:33:18.873 "filename": "dd_sparse_aio_disk", 00:33:18.873 "name": "dd_aio" 00:33:18.873 }, 00:33:18.873 "method": "bdev_aio_create" 00:33:18.873 }, 00:33:18.873 { 00:33:18.873 "params": { 00:33:18.873 "lvs_name": "dd_lvstore", 00:33:18.873 "bdev_name": "dd_aio" 00:33:18.873 }, 00:33:18.873 "method": "bdev_lvol_create_lvstore" 00:33:18.873 }, 00:33:18.873 { 00:33:18.873 "method": "bdev_wait_for_examine" 00:33:18.873 } 00:33:18.873 ] 00:33:18.873 } 00:33:18.873 ] 00:33:18.873 } 00:33:18.873 [2024-04-24 00:47:12.564441] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:33:18.874 [2024-04-24 00:47:12.564843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145847 ] 00:33:19.131 [2024-04-24 00:47:12.733781] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.389 [2024-04-24 00:47:13.030370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.852  Copying: 12/36 [MB] (average 750 MBps) 00:33:21.852 00:33:21.852 00:47:15 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:33:21.852 00:47:15 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:33:21.852 00:47:15 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:33:21.852 00:47:15 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:33:21.852 00:47:15 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:33:21.852 00:47:15 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:33:21.852 00:47:15 -- dd/sparse.sh@52 -- # stat1_b=24576 00:33:21.852 00:47:15 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:33:21.852 00:47:15 -- dd/sparse.sh@53 -- # stat2_b=24576 00:33:21.852 00:47:15 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:33:21.852 00:33:21.852 real 0m2.696s 00:33:21.852 user 0m2.293s 00:33:21.852 sys 0m0.266s 00:33:21.852 00:47:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:21.852 ************************************ 00:33:21.852 END TEST dd_sparse_file_to_file 00:33:21.852 ************************************ 00:33:21.852 00:47:15 -- common/autotest_common.sh@10 -- # set +x 00:33:21.852 00:47:15 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:33:21.852 00:47:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:21.852 00:47:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:21.852 00:47:15 -- common/autotest_common.sh@10 -- # set +x 00:33:21.852 ************************************ 00:33:21.852 START TEST dd_sparse_file_to_bdev 00:33:21.852 ************************************ 00:33:21.852 00:47:15 -- common/autotest_common.sh@1111 -- # file_to_bdev 00:33:21.852 00:47:15 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:33:21.852 00:47:15 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:33:21.852 00:47:15 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:33:21.852 00:47:15 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:33:21.852 00:47:15 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:33:21.852 00:47:15 -- dd/sparse.sh@73 -- # gen_conf 00:33:21.852 00:47:15 -- dd/common.sh@31 -- # xtrace_disable 00:33:21.852 00:47:15 -- common/autotest_common.sh@10 -- # set +x 00:33:21.852 { 00:33:21.852 "subsystems": [ 00:33:21.852 { 00:33:21.852 "subsystem": "bdev", 00:33:21.852 "config": [ 00:33:21.852 { 00:33:21.852 "params": { 00:33:21.852 "block_size": 4096, 00:33:21.852 "filename": "dd_sparse_aio_disk", 00:33:21.852 "name": "dd_aio" 00:33:21.852 }, 00:33:21.852 "method": "bdev_aio_create" 00:33:21.852 }, 00:33:21.852 { 00:33:21.852 "params": { 00:33:21.853 "lvs_name": "dd_lvstore", 00:33:21.853 "lvol_name": "dd_lvol", 00:33:21.853 "size": 37748736, 00:33:21.853 "thin_provision": true 00:33:21.853 }, 
00:33:21.853 "method": "bdev_lvol_create" 00:33:21.853 }, 00:33:21.853 { 00:33:21.853 "method": "bdev_wait_for_examine" 00:33:21.853 } 00:33:21.853 ] 00:33:21.853 } 00:33:21.853 ] 00:33:21.853 } 00:33:21.853 [2024-04-24 00:47:15.354057] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:33:21.853 [2024-04-24 00:47:15.354791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145923 ] 00:33:21.853 [2024-04-24 00:47:15.517165] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.112 [2024-04-24 00:47:15.756369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.370 [2024-04-24 00:47:16.152325] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:33:22.627  Copying: 12/36 [MB] (average 461 MBps)[2024-04-24 00:47:16.221752] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:33:23.998 00:33:23.998 00:33:23.998 ************************************ 00:33:23.998 END TEST dd_sparse_file_to_bdev 00:33:23.998 ************************************ 00:33:23.998 00:33:23.998 real 0m2.480s 00:33:23.998 user 0m2.142s 00:33:23.998 sys 0m0.247s 00:33:23.998 00:47:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:23.998 00:47:17 -- common/autotest_common.sh@10 -- # set +x 00:33:24.259 00:47:17 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:33:24.259 00:47:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:24.259 00:47:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:24.259 00:47:17 -- common/autotest_common.sh@10 -- # set +x 00:33:24.259 ************************************ 00:33:24.259 START TEST dd_sparse_bdev_to_file 00:33:24.259 ************************************ 00:33:24.259 00:47:17 -- common/autotest_common.sh@1111 -- # bdev_to_file 00:33:24.259 00:47:17 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:33:24.259 00:47:17 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:33:24.259 00:47:17 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:33:24.259 00:47:17 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:33:24.259 00:47:17 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:33:24.259 00:47:17 -- dd/sparse.sh@91 -- # gen_conf 00:33:24.259 00:47:17 -- dd/common.sh@31 -- # xtrace_disable 00:33:24.259 00:47:17 -- common/autotest_common.sh@10 -- # set +x 00:33:24.259 { 00:33:24.259 "subsystems": [ 00:33:24.259 { 00:33:24.259 "subsystem": "bdev", 00:33:24.259 "config": [ 00:33:24.259 { 00:33:24.259 "params": { 00:33:24.259 "block_size": 4096, 00:33:24.259 "filename": "dd_sparse_aio_disk", 00:33:24.259 "name": "dd_aio" 00:33:24.259 }, 00:33:24.259 "method": "bdev_aio_create" 00:33:24.259 }, 00:33:24.259 { 00:33:24.259 "method": "bdev_wait_for_examine" 00:33:24.259 } 00:33:24.259 ] 00:33:24.259 } 00:33:24.259 ] 00:33:24.259 } 00:33:24.259 [2024-04-24 00:47:17.917157] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:33:24.259 [2024-04-24 00:47:17.917405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145992 ] 00:33:24.518 [2024-04-24 00:47:18.089866] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.777 [2024-04-24 00:47:18.385679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.758  Copying: 12/36 [MB] (average 857 MBps) 00:33:26.758 00:33:26.758 00:47:20 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:33:26.758 00:47:20 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:33:26.758 00:47:20 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:33:26.758 00:47:20 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:33:26.758 00:47:20 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:33:26.758 00:47:20 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:33:26.758 00:47:20 -- dd/sparse.sh@102 -- # stat2_b=24576 00:33:26.758 00:47:20 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:33:26.758 00:47:20 -- dd/sparse.sh@103 -- # stat3_b=24576 00:33:26.758 00:47:20 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:33:26.758 00:33:26.758 real 0m2.576s 00:33:26.758 user 0m2.213s 00:33:26.758 sys 0m0.264s 00:33:26.758 00:47:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:26.758 00:47:20 -- common/autotest_common.sh@10 -- # set +x 00:33:26.758 ************************************ 00:33:26.758 END TEST dd_sparse_bdev_to_file 00:33:26.758 ************************************ 00:33:26.758 00:47:20 -- dd/sparse.sh@1 -- # cleanup 00:33:26.758 00:47:20 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:33:26.758 00:47:20 -- dd/sparse.sh@12 -- # rm file_zero1 00:33:26.758 00:47:20 -- dd/sparse.sh@13 -- # rm file_zero2 00:33:26.758 00:47:20 -- dd/sparse.sh@14 -- # rm file_zero3 00:33:26.758 00:33:26.758 real 0m8.177s 00:33:26.758 user 0m6.837s 00:33:26.758 sys 0m1.022s 00:33:26.758 00:47:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:26.758 00:47:20 -- common/autotest_common.sh@10 -- # set +x 00:33:26.758 ************************************ 00:33:26.758 END TEST spdk_dd_sparse 00:33:26.758 ************************************ 00:33:26.758 00:47:20 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:33:26.758 00:47:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:26.758 00:47:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:26.758 00:47:20 -- common/autotest_common.sh@10 -- # set +x 00:33:27.031 ************************************ 00:33:27.031 START TEST spdk_dd_negative 00:33:27.031 ************************************ 00:33:27.031 00:47:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:33:27.031 * Looking for test storage... 
00:33:27.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:33:27.031 00:47:20 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:27.031 00:47:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.031 00:47:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.031 00:47:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.031 00:47:20 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:27.031 00:47:20 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:27.031 00:47:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:27.031 00:47:20 -- paths/export.sh@5 -- # export PATH 00:33:27.031 00:47:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:27.031 00:47:20 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:27.031 00:47:20 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:27.031 00:47:20 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:27.031 00:47:20 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:33:27.031 00:47:20 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:33:27.031 00:47:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:27.031 00:47:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:27.031 00:47:20 -- common/autotest_common.sh@10 -- # set +x 00:33:27.031 ************************************ 00:33:27.031 
START TEST dd_invalid_arguments 00:33:27.031 ************************************ 00:33:27.031 00:47:20 -- common/autotest_common.sh@1111 -- # invalid_arguments 00:33:27.031 00:47:20 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:33:27.031 00:47:20 -- common/autotest_common.sh@638 -- # local es=0 00:33:27.031 00:47:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:33:27.031 00:47:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.031 00:47:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:27.031 00:47:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.031 00:47:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:27.031 00:47:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.031 00:47:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:27.031 00:47:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.031 00:47:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:27.031 00:47:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:33:27.031 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:33:27.031 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:33:27.031 00:33:27.031 CPU options: 00:33:27.031 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:33:27.031 (like [0,1,10]) 00:33:27.031 --lcores lcore to CPU mapping list. The list is in the format: 00:33:27.031 [<,lcores[@CPUs]>...] 00:33:27.031 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:33:27.031 Within the group, '-' is used for range separator, 00:33:27.031 ',' is used for single number separator. 00:33:27.031 '( )' can be omitted for single element group, 00:33:27.031 '@' can be omitted if cpus and lcores have the same value 00:33:27.031 --disable-cpumask-locks Disable CPU core lock files. 00:33:27.031 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:33:27.031 pollers in the app support interrupt mode) 00:33:27.031 -p, --main-core main (primary) core for DPDK 00:33:27.031 00:33:27.031 Configuration options: 00:33:27.031 -c, --config, --json JSON config file 00:33:27.031 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:33:27.032 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:33:27.032 --wait-for-rpc wait for RPCs to initialize subsystems 00:33:27.032 --rpcs-allowed comma-separated list of permitted RPCS 00:33:27.032 --json-ignore-init-errors don't exit on invalid config entry 00:33:27.032 00:33:27.032 Memory options: 00:33:27.032 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:33:27.032 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:33:27.032 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:33:27.032 -R, --huge-unlink unlink huge files after initialization 00:33:27.032 -n, --mem-channels number of memory channels used for DPDK 00:33:27.032 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:33:27.032 --msg-mempool-size global message memory pool size in count (default: 262143) 00:33:27.032 --no-huge run without using hugepages 00:33:27.032 -i, --shm-id shared memory ID (optional) 00:33:27.032 -g, --single-file-segments force creating just one hugetlbfs file 00:33:27.032 00:33:27.032 PCI options: 00:33:27.032 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:33:27.032 -B, --pci-blocked pci addr to block (can be used more than once) 00:33:27.032 -u, --no-pci disable PCI access 00:33:27.032 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:33:27.032 00:33:27.032 Log options: 00:33:27.032 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:33:27.032 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:33:27.032 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:33:27.032 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:33:27.032 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:33:27.032 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:33:27.032 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:33:27.032 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:33:27.032 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:33:27.032 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:33:27.032 virtio_vfio_user, vmd) 00:33:27.032 --silence-noticelog disable notice level logging to stderr 00:33:27.032 00:33:27.032 Trace options: 00:33:27.032 --num-trace-entries number of trace entries for each core, must be power of 2, 00:33:27.032 setting 0 to disable trace (default 32768) 00:33:27.032 Tracepoints vary in size and can use more than one trace entry. 00:33:27.032 -e, --tpoint-group [:] 00:33:27.032 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:33:27.032 [2024-04-24 00:47:20.786675] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:33:27.032 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:33:27.032 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:33:27.032 a tracepoint group. First tpoint inside a group can be enabled by 00:33:27.032 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:33:27.032 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:33:27.032 in /include/spdk_internal/trace_defs.h 00:33:27.032 00:33:27.032 Other options: 00:33:27.032 -h, --help show this usage 00:33:27.032 -v, --version print SPDK version 00:33:27.032 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:33:27.032 --env-context Opaque context for use of the env implementation 00:33:27.032 00:33:27.032 Application specific: 00:33:27.032 [--------- DD Options ---------] 00:33:27.032 --if Input file. Must specify either --if or --ib. 00:33:27.032 --ib Input bdev. Must specifier either --if or --ib 00:33:27.032 --of Output file. Must specify either --of or --ob. 00:33:27.032 --ob Output bdev. Must specify either --of or --ob. 00:33:27.032 --iflag Input file flags. 00:33:27.032 --oflag Output file flags. 00:33:27.032 --bs I/O unit size (default: 4096) 00:33:27.032 --qd Queue depth (default: 2) 00:33:27.032 --count I/O unit count. The number of I/O units to copy. (default: all) 00:33:27.032 --skip Skip this many I/O units at start of input. (default: 0) 00:33:27.032 --seek Skip this many I/O units at start of output. (default: 0) 00:33:27.032 --aio Force usage of AIO. (by default io_uring is used if available) 00:33:27.032 --sparse Enable hole skipping in input target 00:33:27.032 Available iflag and oflag values: 00:33:27.032 append - append mode 00:33:27.032 direct - use direct I/O for data 00:33:27.032 directory - fail unless a directory 00:33:27.032 dsync - use synchronized I/O for data 00:33:27.032 noatime - do not update access time 00:33:27.032 noctty - do not assign controlling terminal from file 00:33:27.032 nofollow - do not follow symlinks 00:33:27.032 nonblock - use non-blocking I/O 00:33:27.032 sync - use synchronized I/O for data and metadata 00:33:27.290 00:47:20 -- common/autotest_common.sh@641 -- # es=2 00:33:27.290 00:47:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:27.290 00:47:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:27.290 00:47:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:27.290 00:33:27.290 real 0m0.118s 00:33:27.290 user 0m0.052s 00:33:27.290 sys 0m0.059s 00:33:27.290 00:47:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:27.290 00:47:20 -- common/autotest_common.sh@10 -- # set +x 00:33:27.290 ************************************ 00:33:27.290 END TEST dd_invalid_arguments 00:33:27.290 ************************************ 00:33:27.290 00:47:20 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:33:27.290 00:47:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:27.290 00:47:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:27.290 00:47:20 -- common/autotest_common.sh@10 -- # set +x 00:33:27.290 ************************************ 00:33:27.290 START TEST dd_double_input 00:33:27.290 ************************************ 00:33:27.290 00:47:20 -- common/autotest_common.sh@1111 -- # double_input 00:33:27.290 00:47:20 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:33:27.290 00:47:20 -- common/autotest_common.sh@638 -- # local es=0 00:33:27.290 00:47:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:33:27.290 00:47:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.290 00:47:20 -- common/autotest_common.sh@630 
-- # case "$(type -t "$arg")" in 00:33:27.290 00:47:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.290 00:47:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:27.290 00:47:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.290 00:47:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:27.290 00:47:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.290 00:47:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:27.290 00:47:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:33:27.290 [2024-04-24 00:47:21.031458] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:33:27.548 00:47:21 -- common/autotest_common.sh@641 -- # es=22 00:33:27.548 00:47:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:27.548 00:47:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:27.548 00:47:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:27.548 00:33:27.548 real 0m0.156s 00:33:27.548 user 0m0.072s 00:33:27.548 sys 0m0.083s 00:33:27.548 00:47:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:27.548 ************************************ 00:33:27.548 00:47:21 -- common/autotest_common.sh@10 -- # set +x 00:33:27.548 END TEST dd_double_input 00:33:27.548 ************************************ 00:33:27.548 00:47:21 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:33:27.548 00:47:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:27.548 00:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:27.548 00:47:21 -- common/autotest_common.sh@10 -- # set +x 00:33:27.548 ************************************ 00:33:27.548 START TEST dd_double_output 00:33:27.548 ************************************ 00:33:27.548 00:47:21 -- common/autotest_common.sh@1111 -- # double_output 00:33:27.548 00:47:21 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:33:27.548 00:47:21 -- common/autotest_common.sh@638 -- # local es=0 00:33:27.548 00:47:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:33:27.548 00:47:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.548 00:47:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:27.548 00:47:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.548 00:47:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:27.548 00:47:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.548 00:47:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:27.548 00:47:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.548 00:47:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:27.548 00:47:21 -- common/autotest_common.sh@641 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:33:27.548 [2024-04-24 00:47:21.284782] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:33:27.548 00:47:21 -- common/autotest_common.sh@641 -- # es=22 00:33:27.548 00:47:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:27.806 00:47:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:27.806 00:47:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:27.806 00:33:27.806 real 0m0.146s 00:33:27.806 user 0m0.054s 00:33:27.806 sys 0m0.090s 00:33:27.806 00:47:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:27.806 00:47:21 -- common/autotest_common.sh@10 -- # set +x 00:33:27.806 ************************************ 00:33:27.806 END TEST dd_double_output 00:33:27.806 ************************************ 00:33:27.806 00:47:21 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:33:27.806 00:47:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:27.806 00:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:27.806 00:47:21 -- common/autotest_common.sh@10 -- # set +x 00:33:27.806 ************************************ 00:33:27.806 START TEST dd_no_input 00:33:27.806 ************************************ 00:33:27.806 00:47:21 -- common/autotest_common.sh@1111 -- # no_input 00:33:27.806 00:47:21 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:33:27.806 00:47:21 -- common/autotest_common.sh@638 -- # local es=0 00:33:27.806 00:47:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:33:27.806 00:47:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.806 00:47:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:27.806 00:47:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.807 00:47:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:27.807 00:47:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.807 00:47:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:27.807 00:47:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:27.807 00:47:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:27.807 00:47:21 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:33:27.807 [2024-04-24 00:47:21.530392] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:33:27.807 00:47:21 -- common/autotest_common.sh@641 -- # es=22 00:33:27.807 00:47:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:27.807 00:47:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:27.807 00:47:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:27.807 00:33:27.807 real 0m0.132s 00:33:27.807 user 0m0.045s 00:33:27.807 sys 0m0.086s 00:33:27.807 00:47:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:27.807 00:47:21 -- common/autotest_common.sh@10 -- # set +x 00:33:27.807 ************************************ 00:33:27.807 END TEST dd_no_input 00:33:27.807 ************************************ 00:33:28.065 00:47:21 -- dd/negative_dd.sh@111 -- # run_test dd_no_output 
no_output 00:33:28.065 00:47:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:28.065 00:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:28.065 00:47:21 -- common/autotest_common.sh@10 -- # set +x 00:33:28.065 ************************************ 00:33:28.065 START TEST dd_no_output 00:33:28.065 ************************************ 00:33:28.065 00:47:21 -- common/autotest_common.sh@1111 -- # no_output 00:33:28.065 00:47:21 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:28.065 00:47:21 -- common/autotest_common.sh@638 -- # local es=0 00:33:28.065 00:47:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:28.065 00:47:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:28.065 00:47:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:28.065 00:47:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:28.065 00:47:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:28.065 00:47:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:28.065 00:47:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:28.065 00:47:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:28.065 00:47:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:28.065 00:47:21 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:33:28.065 [2024-04-24 00:47:21.785719] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:33:28.065 00:47:21 -- common/autotest_common.sh@641 -- # es=22 00:33:28.065 00:47:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:28.065 00:47:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:28.065 00:47:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:28.065 00:33:28.065 real 0m0.158s 00:33:28.065 user 0m0.074s 00:33:28.065 sys 0m0.082s 00:33:28.065 00:47:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:28.065 00:47:21 -- common/autotest_common.sh@10 -- # set +x 00:33:28.065 ************************************ 00:33:28.065 END TEST dd_no_output 00:33:28.065 ************************************ 00:33:28.323 00:47:21 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:33:28.323 00:47:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:28.323 00:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:28.323 00:47:21 -- common/autotest_common.sh@10 -- # set +x 00:33:28.323 ************************************ 00:33:28.323 START TEST dd_wrong_blocksize 00:33:28.323 ************************************ 00:33:28.323 00:47:21 -- common/autotest_common.sh@1111 -- # wrong_blocksize 00:33:28.323 00:47:21 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:33:28.323 00:47:21 -- common/autotest_common.sh@638 -- # local es=0 00:33:28.323 00:47:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:33:28.323 00:47:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:28.323 00:47:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:28.323 00:47:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:28.323 00:47:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:28.323 00:47:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:28.323 00:47:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:28.323 00:47:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:28.323 00:47:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:28.323 00:47:21 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:33:28.323 [2024-04-24 00:47:22.038960] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:33:28.323 00:47:22 -- common/autotest_common.sh@641 -- # es=22 00:33:28.323 00:47:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:28.323 00:47:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:28.323 00:47:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:28.323 00:33:28.323 real 0m0.134s 00:33:28.323 user 0m0.076s 00:33:28.323 sys 0m0.056s 00:33:28.323 00:47:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:28.323 00:47:22 -- common/autotest_common.sh@10 -- # set +x 00:33:28.323 ************************************ 00:33:28.323 END TEST dd_wrong_blocksize 00:33:28.323 ************************************ 00:33:28.582 00:47:22 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:33:28.582 00:47:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:28.582 00:47:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:28.582 00:47:22 -- common/autotest_common.sh@10 -- # set +x 00:33:28.582 ************************************ 00:33:28.582 START TEST dd_smaller_blocksize 00:33:28.582 ************************************ 00:33:28.582 00:47:22 -- common/autotest_common.sh@1111 -- # smaller_blocksize 00:33:28.582 00:47:22 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:33:28.582 00:47:22 -- common/autotest_common.sh@638 -- # local es=0 00:33:28.582 00:47:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:33:28.582 00:47:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:28.582 00:47:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:28.582 00:47:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:28.582 00:47:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:28.582 00:47:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:28.582 00:47:22 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:28.582 00:47:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:28.582 00:47:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:28.582 00:47:22 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:33:28.582 [2024-04-24 00:47:22.290199] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:33:28.582 [2024-04-24 00:47:22.290434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146309 ] 00:33:28.895 [2024-04-24 00:47:22.476306] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.153 [2024-04-24 00:47:22.800743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.088 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:33:30.088 [2024-04-24 00:47:23.755653] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:33:30.088 [2024-04-24 00:47:23.755772] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:31.023 [2024-04-24 00:47:24.679600] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:33:31.591 ************************************ 00:33:31.591 END TEST dd_smaller_blocksize 00:33:31.591 ************************************ 00:33:31.591 00:47:25 -- common/autotest_common.sh@641 -- # es=244 00:33:31.591 00:47:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:31.591 00:47:25 -- common/autotest_common.sh@650 -- # es=116 00:33:31.591 00:47:25 -- common/autotest_common.sh@651 -- # case "$es" in 00:33:31.591 00:47:25 -- common/autotest_common.sh@658 -- # es=1 00:33:31.591 00:47:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:31.591 00:33:31.591 real 0m3.018s 00:33:31.591 user 0m2.141s 00:33:31.591 sys 0m0.774s 00:33:31.591 00:47:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:31.591 00:47:25 -- common/autotest_common.sh@10 -- # set +x 00:33:31.591 00:47:25 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:33:31.591 00:47:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:31.591 00:47:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:31.591 00:47:25 -- common/autotest_common.sh@10 -- # set +x 00:33:31.591 ************************************ 00:33:31.591 START TEST dd_invalid_count 00:33:31.591 ************************************ 00:33:31.591 00:47:25 -- common/autotest_common.sh@1111 -- # invalid_count 00:33:31.591 00:47:25 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:33:31.591 00:47:25 -- common/autotest_common.sh@638 -- # local es=0 00:33:31.591 00:47:25 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:33:31.591 00:47:25 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:31.591 00:47:25 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:31.591 00:47:25 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:31.591 00:47:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:31.591 00:47:25 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:31.591 00:47:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:31.591 00:47:25 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:31.591 00:47:25 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:31.591 00:47:25 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:33:31.850 [2024-04-24 00:47:25.408127] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:33:31.850 00:47:25 -- common/autotest_common.sh@641 -- # es=22 00:33:31.850 00:47:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:31.850 00:47:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:31.850 00:47:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:31.850 00:33:31.850 real 0m0.141s 00:33:31.850 user 0m0.067s 00:33:31.850 sys 0m0.071s 00:33:31.850 00:47:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:31.850 00:47:25 -- common/autotest_common.sh@10 -- # set +x 00:33:31.850 ************************************ 00:33:31.850 END TEST dd_invalid_count 00:33:31.850 ************************************ 00:33:31.850 00:47:25 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:33:31.850 00:47:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:31.850 00:47:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:31.850 00:47:25 -- common/autotest_common.sh@10 -- # set +x 00:33:31.850 ************************************ 00:33:31.850 START TEST dd_invalid_oflag 00:33:31.850 ************************************ 00:33:31.850 00:47:25 -- common/autotest_common.sh@1111 -- # invalid_oflag 00:33:31.850 00:47:25 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:33:31.850 00:47:25 -- common/autotest_common.sh@638 -- # local es=0 00:33:31.850 00:47:25 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:33:31.850 00:47:25 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:31.850 00:47:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:31.850 00:47:25 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:31.850 00:47:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:31.850 00:47:25 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:31.850 00:47:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:31.850 00:47:25 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:31.850 00:47:25 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:31.850 00:47:25 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:33:32.108 [2024-04-24 00:47:25.662046] 
spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:33:32.108 00:47:25 -- common/autotest_common.sh@641 -- # es=22 00:33:32.108 00:47:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:32.108 00:47:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:32.108 00:47:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:32.108 00:33:32.108 real 0m0.148s 00:33:32.108 user 0m0.076s 00:33:32.108 sys 0m0.071s 00:33:32.108 00:47:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:32.108 00:47:25 -- common/autotest_common.sh@10 -- # set +x 00:33:32.108 ************************************ 00:33:32.108 END TEST dd_invalid_oflag 00:33:32.108 ************************************ 00:33:32.108 00:47:25 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:33:32.108 00:47:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:32.108 00:47:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:32.108 00:47:25 -- common/autotest_common.sh@10 -- # set +x 00:33:32.108 ************************************ 00:33:32.108 START TEST dd_invalid_iflag 00:33:32.108 ************************************ 00:33:32.108 00:47:25 -- common/autotest_common.sh@1111 -- # invalid_iflag 00:33:32.108 00:47:25 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:33:32.108 00:47:25 -- common/autotest_common.sh@638 -- # local es=0 00:33:32.108 00:47:25 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:33:32.108 00:47:25 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:32.108 00:47:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:32.108 00:47:25 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:32.108 00:47:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:32.108 00:47:25 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:32.108 00:47:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:32.108 00:47:25 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:32.108 00:47:25 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:32.108 00:47:25 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:33:32.366 [2024-04-24 00:47:25.921371] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:33:32.366 00:47:25 -- common/autotest_common.sh@641 -- # es=22 00:33:32.366 00:47:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:32.366 00:47:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:32.366 00:47:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:32.366 00:33:32.366 real 0m0.157s 00:33:32.366 user 0m0.096s 00:33:32.366 sys 0m0.059s 00:33:32.366 00:47:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:32.366 00:47:25 -- common/autotest_common.sh@10 -- # set +x 00:33:32.366 ************************************ 00:33:32.366 END TEST dd_invalid_iflag 00:33:32.366 ************************************ 00:33:32.366 00:47:26 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:33:32.366 00:47:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:32.366 00:47:26 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:33:32.366 00:47:26 -- common/autotest_common.sh@10 -- # set +x 00:33:32.366 ************************************ 00:33:32.366 START TEST dd_unknown_flag 00:33:32.366 ************************************ 00:33:32.366 00:47:26 -- common/autotest_common.sh@1111 -- # unknown_flag 00:33:32.366 00:47:26 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:33:32.366 00:47:26 -- common/autotest_common.sh@638 -- # local es=0 00:33:32.366 00:47:26 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:33:32.366 00:47:26 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:32.366 00:47:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:32.366 00:47:26 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:32.366 00:47:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:32.366 00:47:26 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:32.366 00:47:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:32.366 00:47:26 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:32.366 00:47:26 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:32.366 00:47:26 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:33:32.642 [2024-04-24 00:47:26.185341] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
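Every negative test in this suite follows the same shape: the expected-to-fail spdk_dd invocation is wrapped in autotest_common.sh's NOT helper, the exit status is captured as es, and the test passes only if the command really failed. The sketch below is a simplified stand-in for that pattern, not the actual autotest_common.sh implementation (which also resolves the executable and manages xtrace):

# Simplified stand-in for the NOT/es pattern visible in the xtrace above.
NOT() {
    local es=0
    "$@" || es=$?
    # fold signal-style statuses down, as with es=234 -> 106 -> 1 for this test in the log
    if (( es > 128 )); then es=$(( es & 0x7f )); fi
    (( es != 0 ))   # NOT succeeds only when the wrapped command failed
}

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# e.g. dd_unknown_flag: an unrecognized --oflag value must be rejected
NOT "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
               --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1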
00:33:32.642 [2024-04-24 00:47:26.185560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146471 ] 00:33:32.642 [2024-04-24 00:47:26.375552] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.913 [2024-04-24 00:47:26.690777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.478 [2024-04-24 00:47:27.094777] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:33:33.478 [2024-04-24 00:47:27.094914] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:33.478  Copying: 0/0 [B] (average 0 Bps)[2024-04-24 00:47:27.095177] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:33:34.410 [2024-04-24 00:47:28.014995] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:33:34.976 00:33:34.976 00:33:34.976 00:47:28 -- common/autotest_common.sh@641 -- # es=234 00:33:34.976 00:47:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:34.976 00:47:28 -- common/autotest_common.sh@650 -- # es=106 00:33:34.976 00:47:28 -- common/autotest_common.sh@651 -- # case "$es" in 00:33:34.976 00:47:28 -- common/autotest_common.sh@658 -- # es=1 00:33:34.976 00:47:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:34.976 00:33:34.976 real 0m2.491s 00:33:34.976 user 0m1.995s 00:33:34.976 sys 0m0.343s 00:33:34.976 00:47:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:34.976 00:47:28 -- common/autotest_common.sh@10 -- # set +x 00:33:34.976 ************************************ 00:33:34.976 END TEST dd_unknown_flag 00:33:34.976 ************************************ 00:33:34.976 00:47:28 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:33:34.976 00:47:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:34.976 00:47:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:34.976 00:47:28 -- common/autotest_common.sh@10 -- # set +x 00:33:34.976 ************************************ 00:33:34.976 START TEST dd_invalid_json 00:33:34.976 ************************************ 00:33:34.976 00:47:28 -- common/autotest_common.sh@1111 -- # invalid_json 00:33:34.976 00:47:28 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:33:34.976 00:47:28 -- dd/negative_dd.sh@95 -- # : 00:33:34.976 00:47:28 -- common/autotest_common.sh@638 -- # local es=0 00:33:34.976 00:47:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:33:34.976 00:47:28 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:34.976 00:47:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:34.976 00:47:28 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:34.976 00:47:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:34.976 00:47:28 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:34.976 00:47:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:34.976 00:47:28 -- common/autotest_common.sh@632 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:34.976 00:47:28 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:33:34.976 00:47:28 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:33:35.233 [2024-04-24 00:47:28.801654] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:33:35.233 [2024-04-24 00:47:28.801900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146530 ] 00:33:35.233 [2024-04-24 00:47:28.989337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.491 [2024-04-24 00:47:29.273144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:35.491 [2024-04-24 00:47:29.273573] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:33:35.491 [2024-04-24 00:47:29.273739] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:33:35.491 [2024-04-24 00:47:29.273869] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:35.491 [2024-04-24 00:47:29.274026] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:33:36.058 00:47:29 -- common/autotest_common.sh@641 -- # es=234 00:33:36.058 00:47:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:36.058 00:47:29 -- common/autotest_common.sh@650 -- # es=106 00:33:36.058 00:47:29 -- common/autotest_common.sh@651 -- # case "$es" in 00:33:36.058 00:47:29 -- common/autotest_common.sh@658 -- # es=1 00:33:36.058 00:47:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:36.058 00:33:36.058 real 0m1.078s 00:33:36.058 user 0m0.760s 00:33:36.058 sys 0m0.218s 00:33:36.058 00:47:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:36.058 00:47:29 -- common/autotest_common.sh@10 -- # set +x 00:33:36.058 ************************************ 00:33:36.058 END TEST dd_invalid_json 00:33:36.058 ************************************ 00:33:36.058 ************************************ 00:33:36.058 END TEST spdk_dd_negative 00:33:36.058 ************************************ 00:33:36.058 00:33:36.058 real 0m9.272s 00:33:36.058 user 0m6.127s 00:33:36.058 sys 0m2.762s 00:33:36.058 00:47:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:36.058 00:47:29 -- common/autotest_common.sh@10 -- # set +x 00:33:36.315 00:33:36.315 real 3m13.557s 00:33:36.315 user 2m39.164s 00:33:36.315 sys 0m24.504s 00:33:36.315 00:47:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:36.315 00:47:29 -- common/autotest_common.sh@10 -- # set +x 00:33:36.315 ************************************ 00:33:36.315 END TEST spdk_dd 00:33:36.315 ************************************ 00:33:36.315 00:47:29 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:33:36.315 00:47:29 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:33:36.315 00:47:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:33:36.315 00:47:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:36.315 00:47:29 -- common/autotest_common.sh@10 -- # set +x 00:33:36.315 ************************************ 00:33:36.315 START TEST blockdev_nvme 00:33:36.315 ************************************ 
00:33:36.315 00:47:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:33:36.315 * Looking for test storage... 00:33:36.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:33:36.315 00:47:30 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:33:36.315 00:47:30 -- bdev/nbd_common.sh@6 -- # set -e 00:33:36.315 00:47:30 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:36.315 00:47:30 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:36.315 00:47:30 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:33:36.315 00:47:30 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:33:36.315 00:47:30 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:33:36.315 00:47:30 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:33:36.315 00:47:30 -- bdev/blockdev.sh@20 -- # : 00:33:36.596 00:47:30 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:33:36.596 00:47:30 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:33:36.596 00:47:30 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:33:36.596 00:47:30 -- bdev/blockdev.sh@674 -- # uname -s 00:33:36.596 00:47:30 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:33:36.596 00:47:30 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:33:36.596 00:47:30 -- bdev/blockdev.sh@682 -- # test_type=nvme 00:33:36.596 00:47:30 -- bdev/blockdev.sh@683 -- # crypto_device= 00:33:36.596 00:47:30 -- bdev/blockdev.sh@684 -- # dek= 00:33:36.596 00:47:30 -- bdev/blockdev.sh@685 -- # env_ctx= 00:33:36.596 00:47:30 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:33:36.596 00:47:30 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:33:36.596 00:47:30 -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:33:36.596 00:47:30 -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:33:36.596 00:47:30 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:33:36.596 00:47:30 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=146638 00:33:36.596 00:47:30 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:33:36.596 00:47:30 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:36.596 00:47:30 -- bdev/blockdev.sh@49 -- # waitforlisten 146638 00:33:36.596 00:47:30 -- common/autotest_common.sh@817 -- # '[' -z 146638 ']' 00:33:36.596 00:47:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.596 00:47:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:36.596 00:47:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:36.596 00:47:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:36.596 00:47:30 -- common/autotest_common.sh@10 -- # set +x 00:33:36.596 [2024-04-24 00:47:30.211364] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
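start_spdk_tgt plus waitforlisten amount to launching spdk_tgt in the background and polling until the RPC socket appears; the controller configuration that follows in the log is then loaded over that socket. A rough standalone equivalent using scripts/rpc.py instead of the harness's rpc_cmd is sketched below; the socket path and PCI address match this run, the rest is illustrative:

SPDK=/home/vagrant/spdk_repo/spdk

"$SPDK/build/bin/spdk_tgt" &
spdk_tgt_pid=$!

# crude stand-in for waitforlisten: poll until the RPC socket exists
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done

# same controller the test attaches via gen_nvme.sh / load_subsystem_config below
"$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
"$SPDK/scripts/rpc.py" bdev_get_bdevs

kill "$spdk_tgt_pid"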
00:33:36.596 [2024-04-24 00:47:30.211562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146638 ] 00:33:36.866 [2024-04-24 00:47:30.380783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.124 [2024-04-24 00:47:30.670824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.056 00:47:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:38.056 00:47:31 -- common/autotest_common.sh@850 -- # return 0 00:33:38.056 00:47:31 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:33:38.056 00:47:31 -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:33:38.056 00:47:31 -- bdev/blockdev.sh@81 -- # local json 00:33:38.056 00:47:31 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:33:38.057 00:47:31 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:38.314 00:47:31 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:33:38.314 00:47:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:38.314 00:47:31 -- common/autotest_common.sh@10 -- # set +x 00:33:38.314 00:47:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:38.314 00:47:31 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:33:38.314 00:47:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:38.314 00:47:31 -- common/autotest_common.sh@10 -- # set +x 00:33:38.314 00:47:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:38.314 00:47:31 -- bdev/blockdev.sh@740 -- # cat 00:33:38.314 00:47:31 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:33:38.314 00:47:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:38.314 00:47:31 -- common/autotest_common.sh@10 -- # set +x 00:33:38.314 00:47:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:38.314 00:47:31 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:33:38.314 00:47:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:38.314 00:47:31 -- common/autotest_common.sh@10 -- # set +x 00:33:38.314 00:47:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:38.314 00:47:31 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:38.314 00:47:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:38.314 00:47:31 -- common/autotest_common.sh@10 -- # set +x 00:33:38.314 00:47:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:38.314 00:47:32 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:33:38.314 00:47:32 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:33:38.314 00:47:32 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:33:38.314 00:47:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:38.314 00:47:32 -- common/autotest_common.sh@10 -- # set +x 00:33:38.314 00:47:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:38.314 00:47:32 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:33:38.314 00:47:32 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "5d467446-e9c9-41de-a302-60079f5bfba2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5d467446-e9c9-41de-a302-60079f5bfba2",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:33:38.314 00:47:32 -- bdev/blockdev.sh@749 -- # jq -r .name 00:33:38.572 00:47:32 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:33:38.572 00:47:32 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:33:38.572 00:47:32 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:33:38.572 00:47:32 -- bdev/blockdev.sh@754 -- # killprocess 146638 00:33:38.572 00:47:32 -- common/autotest_common.sh@936 -- # '[' -z 146638 ']' 00:33:38.572 00:47:32 -- common/autotest_common.sh@940 -- # kill -0 146638 00:33:38.572 00:47:32 -- common/autotest_common.sh@941 -- # uname 00:33:38.572 00:47:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:38.572 00:47:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146638 00:33:38.572 00:47:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:38.572 00:47:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:38.572 killing process with pid 146638 00:33:38.573 00:47:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146638' 00:33:38.573 00:47:32 -- common/autotest_common.sh@955 -- # kill 146638 00:33:38.573 00:47:32 -- common/autotest_common.sh@960 -- # wait 146638 00:33:41.861 00:47:35 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:41.861 00:47:35 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:33:41.862 00:47:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:33:41.862 00:47:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:41.862 00:47:35 -- common/autotest_common.sh@10 -- # set +x 00:33:41.862 ************************************ 00:33:41.862 START TEST bdev_hello_world 00:33:41.862 ************************************ 00:33:41.862 00:47:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:33:41.862 [2024-04-24 00:47:35.207069] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:33:41.862 [2024-04-24 00:47:35.207337] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146743 ] 00:33:41.862 [2024-04-24 00:47:35.392770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.862 [2024-04-24 00:47:35.629311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.432 [2024-04-24 00:47:36.142041] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:33:42.432 [2024-04-24 00:47:36.142110] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:33:42.432 [2024-04-24 00:47:36.142144] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:33:42.432 [2024-04-24 00:47:36.145561] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:33:42.432 [2024-04-24 00:47:36.146206] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:33:42.432 [2024-04-24 00:47:36.146264] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:33:42.432 [2024-04-24 00:47:36.146566] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:33:42.432 00:33:42.432 [2024-04-24 00:47:36.146608] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:33:44.333 00:33:44.333 real 0m2.602s 00:33:44.333 user 0m2.236s 00:33:44.333 sys 0m0.268s 00:33:44.333 00:47:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:44.333 00:47:37 -- common/autotest_common.sh@10 -- # set +x 00:33:44.333 ************************************ 00:33:44.333 END TEST bdev_hello_world 00:33:44.333 ************************************ 00:33:44.333 00:47:37 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:33:44.333 00:47:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:33:44.333 00:47:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:44.333 00:47:37 -- common/autotest_common.sh@10 -- # set +x 00:33:44.333 ************************************ 00:33:44.333 START TEST bdev_bounds 00:33:44.333 ************************************ 00:33:44.333 00:47:37 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:33:44.333 00:47:37 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:44.333 00:47:37 -- bdev/blockdev.sh@290 -- # bdevio_pid=146800 00:33:44.334 00:47:37 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:33:44.334 00:47:37 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 146800' 00:33:44.334 Process bdevio pid: 146800 00:33:44.334 00:47:37 -- bdev/blockdev.sh@293 -- # waitforlisten 146800 00:33:44.334 00:47:37 -- common/autotest_common.sh@817 -- # '[' -z 146800 ']' 00:33:44.334 00:47:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:44.334 00:47:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:44.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:44.334 00:47:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:44.334 00:47:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:44.334 00:47:37 -- common/autotest_common.sh@10 -- # set +x 00:33:44.334 [2024-04-24 00:47:37.899100] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:33:44.334 [2024-04-24 00:47:37.899296] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146800 ] 00:33:44.334 [2024-04-24 00:47:38.099365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:44.591 [2024-04-24 00:47:38.363668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.591 [2024-04-24 00:47:38.363788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.591 [2024-04-24 00:47:38.363794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:45.156 00:47:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:45.156 00:47:38 -- common/autotest_common.sh@850 -- # return 0 00:33:45.156 00:47:38 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:33:45.414 I/O targets: 00:33:45.414 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:33:45.414 00:33:45.414 00:33:45.414 CUnit - A unit testing framework for C - Version 2.1-3 00:33:45.414 http://cunit.sourceforge.net/ 00:33:45.414 00:33:45.414 00:33:45.414 Suite: bdevio tests on: Nvme0n1 00:33:45.414 Test: blockdev write read block ...passed 00:33:45.414 Test: blockdev write zeroes read block ...passed 00:33:45.414 Test: blockdev write zeroes read no split ...passed 00:33:45.414 Test: blockdev write zeroes read split ...passed 00:33:45.414 Test: blockdev write zeroes read split partial ...passed 00:33:45.414 Test: blockdev reset ...[2024-04-24 00:47:39.050114] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:33:45.414 [2024-04-24 00:47:39.054698] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:45.414 passed 00:33:45.414 Test: blockdev write read 8 blocks ...passed 00:33:45.414 Test: blockdev write read size > 128k ...passed 00:33:45.414 Test: blockdev write read invalid size ...passed 00:33:45.414 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:45.414 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:45.414 Test: blockdev write read max offset ...passed 00:33:45.414 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:45.414 Test: blockdev writev readv 8 blocks ...passed 00:33:45.414 Test: blockdev writev readv 30 x 1block ...passed 00:33:45.414 Test: blockdev writev readv block ...passed 00:33:45.414 Test: blockdev writev readv size > 128k ...passed 00:33:45.414 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:45.414 Test: blockdev comparev and writev ...[2024-04-24 00:47:39.064132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a60d000 len:0x1000 00:33:45.414 [2024-04-24 00:47:39.064272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:33:45.414 passed 00:33:45.415 Test: blockdev nvme passthru rw ...passed 00:33:45.415 Test: blockdev nvme passthru vendor specific ...[2024-04-24 00:47:39.065263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:33:45.415 [2024-04-24 00:47:39.065320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:33:45.415 passed 00:33:45.415 Test: blockdev nvme admin passthru ...passed 00:33:45.415 Test: blockdev copy ...passed 00:33:45.415 00:33:45.415 Run Summary: Type Total Ran Passed Failed Inactive 00:33:45.415 suites 1 1 n/a 0 0 00:33:45.415 tests 23 23 23 0 0 00:33:45.415 asserts 152 152 152 0 n/a 00:33:45.415 00:33:45.415 Elapsed time = 0.250 seconds 00:33:45.415 0 00:33:45.415 00:47:39 -- bdev/blockdev.sh@295 -- # killprocess 146800 00:33:45.415 00:47:39 -- common/autotest_common.sh@936 -- # '[' -z 146800 ']' 00:33:45.415 00:47:39 -- common/autotest_common.sh@940 -- # kill -0 146800 00:33:45.415 00:47:39 -- common/autotest_common.sh@941 -- # uname 00:33:45.415 00:47:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:45.415 00:47:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146800 00:33:45.415 00:47:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:45.415 killing process with pid 146800 00:33:45.415 00:47:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:45.415 00:47:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146800' 00:33:45.415 00:47:39 -- common/autotest_common.sh@955 -- # kill 146800 00:33:45.415 00:47:39 -- common/autotest_common.sh@960 -- # wait 146800 00:33:46.789 00:47:40 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:33:46.789 00:33:46.789 real 0m2.712s 00:33:46.789 user 0m6.191s 00:33:46.789 sys 0m0.406s 00:33:46.789 00:47:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:46.789 00:47:40 -- common/autotest_common.sh@10 -- # set +x 00:33:46.789 ************************************ 00:33:46.789 END TEST bdev_bounds 00:33:46.789 ************************************ 00:33:46.789 00:47:40 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
00:33:46.789 00:47:40 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:33:46.789 00:47:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:46.789 00:47:40 -- common/autotest_common.sh@10 -- # set +x 00:33:47.047 ************************************ 00:33:47.047 START TEST bdev_nbd 00:33:47.047 ************************************ 00:33:47.047 00:47:40 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:33:47.047 00:47:40 -- bdev/blockdev.sh@300 -- # uname -s 00:33:47.047 00:47:40 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:33:47.047 00:47:40 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:47.047 00:47:40 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:47.047 00:47:40 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1') 00:33:47.047 00:47:40 -- bdev/blockdev.sh@304 -- # local bdev_all 00:33:47.047 00:47:40 -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:33:47.047 00:47:40 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:33:47.047 00:47:40 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:33:47.047 00:47:40 -- bdev/blockdev.sh@311 -- # local nbd_all 00:33:47.047 00:47:40 -- bdev/blockdev.sh@312 -- # bdev_num=1 00:33:47.047 00:47:40 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:33:47.047 00:47:40 -- bdev/blockdev.sh@314 -- # local nbd_list 00:33:47.047 00:47:40 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1') 00:33:47.047 00:47:40 -- bdev/blockdev.sh@315 -- # local bdev_list 00:33:47.047 00:47:40 -- bdev/blockdev.sh@318 -- # nbd_pid=146875 00:33:47.047 00:47:40 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:47.047 00:47:40 -- bdev/blockdev.sh@320 -- # waitforlisten 146875 /var/tmp/spdk-nbd.sock 00:33:47.047 00:47:40 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:47.047 00:47:40 -- common/autotest_common.sh@817 -- # '[' -z 146875 ']' 00:33:47.047 00:47:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:47.047 00:47:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:47.047 00:47:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:47.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:47.047 00:47:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:47.047 00:47:40 -- common/autotest_common.sh@10 -- # set +x 00:33:47.047 [2024-04-24 00:47:40.716748] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
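The nbd_function_test below drives the same bdev through the kernel NBD driver. Condensed to the underlying commands, the start/write/verify/stop cycle that the nbd_common.sh helpers perform looks roughly like this (a sketch with paths shortened relative to the repo root; the socket and device names are the ones used in this run):

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  dd if=/dev/urandom of=test/bdev/nbdrandtest bs=4096 count=256
  dd if=test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M test/bdev/nbdrandtest /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

The lvol pass at the end of the test repeats the same export against a logical volume (lvs/lvol) and uses mkfs.ext4 instead of dd as the consumer.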
00:33:47.047 [2024-04-24 00:47:40.716965] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.305 [2024-04-24 00:47:40.902288] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.563 [2024-04-24 00:47:41.122088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.159 00:47:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:48.159 00:47:41 -- common/autotest_common.sh@850 -- # return 0 00:33:48.159 00:47:41 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:33:48.159 00:47:41 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:48.159 00:47:41 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:33:48.159 00:47:41 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:33:48.159 00:47:41 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:33:48.159 00:47:41 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:48.159 00:47:41 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:33:48.159 00:47:41 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:33:48.159 00:47:41 -- bdev/nbd_common.sh@24 -- # local i 00:33:48.159 00:47:41 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:33:48.159 00:47:41 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:33:48.159 00:47:41 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:33:48.159 00:47:41 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:33:48.418 00:47:41 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:33:48.418 00:47:41 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:33:48.418 00:47:41 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:33:48.418 00:47:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:33:48.418 00:47:41 -- common/autotest_common.sh@855 -- # local i 00:33:48.418 00:47:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:48.418 00:47:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:48.418 00:47:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:33:48.418 00:47:41 -- common/autotest_common.sh@859 -- # break 00:33:48.418 00:47:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:48.418 00:47:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:48.418 00:47:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:48.418 1+0 records in 00:33:48.418 1+0 records out 00:33:48.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553171 s, 7.4 MB/s 00:33:48.418 00:47:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:48.418 00:47:42 -- common/autotest_common.sh@872 -- # size=4096 00:33:48.418 00:47:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:48.418 00:47:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:48.418 00:47:42 -- common/autotest_common.sh@875 -- # return 0 00:33:48.418 00:47:42 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:48.418 00:47:42 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:33:48.418 00:47:42 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:48.675 00:47:42 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:48.675 { 00:33:48.675 "nbd_device": "/dev/nbd0", 00:33:48.675 "bdev_name": "Nvme0n1" 00:33:48.675 } 00:33:48.675 ]' 00:33:48.675 00:47:42 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:48.675 00:47:42 -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:48.675 { 00:33:48.675 "nbd_device": "/dev/nbd0", 00:33:48.675 "bdev_name": "Nvme0n1" 00:33:48.675 } 00:33:48.675 ]' 00:33:48.676 00:47:42 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:48.676 00:47:42 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:48.676 00:47:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:48.676 00:47:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:48.676 00:47:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:48.676 00:47:42 -- bdev/nbd_common.sh@51 -- # local i 00:33:48.676 00:47:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:48.676 00:47:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:48.950 00:47:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:48.950 00:47:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:48.950 00:47:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:48.950 00:47:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:48.950 00:47:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:48.950 00:47:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:48.950 00:47:42 -- bdev/nbd_common.sh@41 -- # break 00:33:48.950 00:47:42 -- bdev/nbd_common.sh@45 -- # return 0 00:33:48.950 00:47:42 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:48.950 00:47:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:48.950 00:47:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:49.232 00:47:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@65 -- # true 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@65 -- # count=0 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@122 -- # count=0 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@127 -- # return 0 00:33:49.233 00:47:42 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@12 -- # local i 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:49.233 00:47:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:33:49.490 /dev/nbd0 00:33:49.490 00:47:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:49.490 00:47:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:49.490 00:47:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:33:49.490 00:47:43 -- common/autotest_common.sh@855 -- # local i 00:33:49.490 00:47:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:49.490 00:47:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:49.490 00:47:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:33:49.490 00:47:43 -- common/autotest_common.sh@859 -- # break 00:33:49.490 00:47:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:49.490 00:47:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:49.490 00:47:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:49.490 1+0 records in 00:33:49.490 1+0 records out 00:33:49.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477484 s, 8.6 MB/s 00:33:49.490 00:47:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:49.490 00:47:43 -- common/autotest_common.sh@872 -- # size=4096 00:33:49.490 00:47:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:49.490 00:47:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:49.490 00:47:43 -- common/autotest_common.sh@875 -- # return 0 00:33:49.490 00:47:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:49.490 00:47:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:49.490 00:47:43 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:49.490 00:47:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:49.490 00:47:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:49.748 00:47:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:49.748 { 00:33:49.748 "nbd_device": "/dev/nbd0", 00:33:49.748 "bdev_name": "Nvme0n1" 00:33:49.748 } 00:33:49.748 ]' 00:33:49.748 00:47:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:49.748 { 00:33:49.748 "nbd_device": "/dev/nbd0", 00:33:49.748 "bdev_name": "Nvme0n1" 00:33:49.748 } 00:33:49.748 ]' 00:33:49.748 00:47:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@65 -- # count=1 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@66 -- # echo 1 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@95 -- # count=1 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:33:50.006 00:47:43 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:50.006 256+0 records in 00:33:50.006 256+0 records out 00:33:50.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00740453 s, 142 MB/s 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:50.006 256+0 records in 00:33:50.006 256+0 records out 00:33:50.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0453317 s, 23.1 MB/s 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@51 -- # local i 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:50.006 00:47:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:50.264 00:47:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:50.264 00:47:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:50.264 00:47:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:50.264 00:47:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:50.264 00:47:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:50.264 00:47:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:50.264 00:47:43 -- bdev/nbd_common.sh@41 -- # break 00:33:50.264 00:47:43 -- bdev/nbd_common.sh@45 -- # return 0 00:33:50.264 00:47:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:50.264 00:47:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:50.264 00:47:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:50.522 
00:47:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@65 -- # true 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@65 -- # count=0 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@104 -- # count=0 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@109 -- # return 0 00:33:50.522 00:47:44 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:50.522 00:47:44 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:51.087 malloc_lvol_verify 00:33:51.087 00:47:44 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:51.087 460f9125-6d67-431e-a399-221fb2457d5e 00:33:51.087 00:47:44 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:51.653 a003e621-1b42-435b-a046-8cd0021e8c6e 00:33:51.653 00:47:45 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:51.911 /dev/nbd0 00:33:51.911 00:47:45 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:51.911 mke2fs 1.46.5 (30-Dec-2021) 00:33:51.911 00:33:51.911 Filesystem too small for a journal 00:33:51.911 Discarding device blocks: 0/1024 done 00:33:51.911 Creating filesystem with 1024 4k blocks and 1024 inodes 00:33:51.911 00:33:51.911 Allocating group tables: 0/1 done 00:33:51.911 Writing inode tables: 0/1 done 00:33:51.911 Writing superblocks and filesystem accounting information: 0/1 done 00:33:51.911 00:33:51.911 00:47:45 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:51.911 00:47:45 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:51.911 00:47:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:51.911 00:47:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:51.911 00:47:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:51.911 00:47:45 -- bdev/nbd_common.sh@51 -- # local i 00:33:51.911 00:47:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:51.911 00:47:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:52.168 00:47:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:52.168 00:47:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:52.168 00:47:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:52.168 00:47:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:52.168 00:47:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:52.168 00:47:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:52.168 00:47:45 -- bdev/nbd_common.sh@41 -- # break 00:33:52.168 00:47:45 -- 
bdev/nbd_common.sh@45 -- # return 0 00:33:52.168 00:47:45 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:52.168 00:47:45 -- bdev/nbd_common.sh@147 -- # return 0 00:33:52.168 00:47:45 -- bdev/blockdev.sh@326 -- # killprocess 146875 00:33:52.168 00:47:45 -- common/autotest_common.sh@936 -- # '[' -z 146875 ']' 00:33:52.168 00:47:45 -- common/autotest_common.sh@940 -- # kill -0 146875 00:33:52.168 00:47:45 -- common/autotest_common.sh@941 -- # uname 00:33:52.168 00:47:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:52.168 00:47:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146875 00:33:52.168 00:47:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:52.168 00:47:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:52.168 00:47:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146875' 00:33:52.168 killing process with pid 146875 00:33:52.168 00:47:45 -- common/autotest_common.sh@955 -- # kill 146875 00:33:52.168 00:47:45 -- common/autotest_common.sh@960 -- # wait 146875 00:33:53.540 00:47:47 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:33:53.540 00:33:53.540 real 0m6.667s 00:33:53.540 user 0m9.516s 00:33:53.540 sys 0m1.526s 00:33:53.540 00:47:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:53.540 ************************************ 00:33:53.540 END TEST bdev_nbd 00:33:53.540 ************************************ 00:33:53.540 00:47:47 -- common/autotest_common.sh@10 -- # set +x 00:33:53.540 00:47:47 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:33:53.540 00:47:47 -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:33:53.540 00:47:47 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:33:53.540 skipping fio tests on NVMe due to multi-ns failures. 00:33:53.540 00:47:47 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:53.540 00:47:47 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:53.540 00:47:47 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:33:53.540 00:47:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:53.540 00:47:47 -- common/autotest_common.sh@10 -- # set +x 00:33:53.797 ************************************ 00:33:53.797 START TEST bdev_verify 00:33:53.797 ************************************ 00:33:53.797 00:47:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:53.797 [2024-04-24 00:47:47.472726] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:33:53.797 [2024-04-24 00:47:47.473137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147085 ] 00:33:54.055 [2024-04-24 00:47:47.658102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:54.312 [2024-04-24 00:47:47.886377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.312 [2024-04-24 00:47:47.886383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.879 Running I/O for 5 seconds... 
00:34:00.149 00:34:00.149 Latency(us) 00:34:00.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.149 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:00.149 Verification LBA range: start 0x0 length 0xa0000 00:34:00.149 Nvme0n1 : 5.01 7813.25 30.52 0.00 0.00 16292.12 940.13 26339.23 00:34:00.149 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:00.149 Verification LBA range: start 0xa0000 length 0xa0000 00:34:00.149 Nvme0n1 : 5.01 8183.73 31.97 0.00 0.00 15554.63 1201.49 22719.15 00:34:00.149 =================================================================================================================== 00:34:00.149 Total : 15996.98 62.49 0.00 0.00 15914.84 940.13 26339.23 00:34:01.565 ************************************ 00:34:01.565 END TEST bdev_verify 00:34:01.565 ************************************ 00:34:01.565 00:34:01.565 real 0m7.602s 00:34:01.565 user 0m13.830s 00:34:01.565 sys 0m0.268s 00:34:01.565 00:47:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:01.565 00:47:54 -- common/autotest_common.sh@10 -- # set +x 00:34:01.565 00:47:55 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:01.565 00:47:55 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:34:01.565 00:47:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:01.565 00:47:55 -- common/autotest_common.sh@10 -- # set +x 00:34:01.565 ************************************ 00:34:01.565 START TEST bdev_verify_big_io 00:34:01.565 ************************************ 00:34:01.565 00:47:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:01.565 [2024-04-24 00:47:55.161701] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:34:01.565 [2024-04-24 00:47:55.162132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147191 ] 00:34:01.565 [2024-04-24 00:47:55.334016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:01.824 [2024-04-24 00:47:55.597554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:01.824 [2024-04-24 00:47:55.597559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.390 Running I/O for 5 seconds... 
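In these bdevperf summaries the IOPS and MiB/s columns are linked by the I/O size: the verify run above uses 4096-byte I/Os, so 7813.25 IOPS x 4096 B works out to about 30.52 MiB/s and 8183.73 IOPS x 4096 B to about 31.97 MiB/s, matching the table exactly. The big-I/O run started below applies the same arithmetic with 65536-byte I/Os.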
00:34:07.655 00:34:07.655 Latency(us) 00:34:07.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:07.655 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:07.655 Verification LBA range: start 0x0 length 0xa000 00:34:07.655 Nvme0n1 : 5.12 477.59 29.85 0.00 0.00 258241.51 1271.71 299593.14 00:34:07.655 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:07.655 Verification LBA range: start 0xa000 length 0xa000 00:34:07.655 Nvme0n1 : 5.12 549.86 34.37 0.00 0.00 225373.02 1263.91 250659.60 00:34:07.655 =================================================================================================================== 00:34:07.655 Total : 1027.44 64.22 0.00 0.00 240651.68 1263.91 299593.14 00:34:09.553 00:34:09.553 real 0m7.782s 00:34:09.553 user 0m14.224s 00:34:09.553 sys 0m0.232s 00:34:09.553 00:48:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:09.553 00:48:02 -- common/autotest_common.sh@10 -- # set +x 00:34:09.553 ************************************ 00:34:09.553 END TEST bdev_verify_big_io 00:34:09.553 ************************************ 00:34:09.553 00:48:02 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:09.553 00:48:02 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:34:09.553 00:48:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:09.553 00:48:02 -- common/autotest_common.sh@10 -- # set +x 00:34:09.553 ************************************ 00:34:09.553 START TEST bdev_write_zeroes 00:34:09.553 ************************************ 00:34:09.553 00:48:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:09.553 [2024-04-24 00:48:03.033179] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:34:09.553 [2024-04-24 00:48:03.033524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147310 ] 00:34:09.553 [2024-04-24 00:48:03.195907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.812 [2024-04-24 00:48:03.408139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.073 Running I/O for 1 seconds... 
00:34:11.457 00:34:11.457 Latency(us) 00:34:11.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.457 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:11.457 Nvme0n1 : 1.00 54312.13 212.16 0.00 0.00 2351.11 998.64 9050.21 00:34:11.457 =================================================================================================================== 00:34:11.457 Total : 54312.13 212.16 0.00 0.00 2351.11 998.64 9050.21 00:34:12.840 ************************************ 00:34:12.840 END TEST bdev_write_zeroes 00:34:12.840 ************************************ 00:34:12.840 00:34:12.840 real 0m3.459s 00:34:12.840 user 0m3.120s 00:34:12.840 sys 0m0.236s 00:34:12.840 00:48:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:12.840 00:48:06 -- common/autotest_common.sh@10 -- # set +x 00:34:12.840 00:48:06 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:12.840 00:48:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:34:12.840 00:48:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:12.840 00:48:06 -- common/autotest_common.sh@10 -- # set +x 00:34:12.840 ************************************ 00:34:12.840 START TEST bdev_json_nonenclosed 00:34:12.840 ************************************ 00:34:12.840 00:48:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:13.098 [2024-04-24 00:48:06.635432] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:34:13.098 [2024-04-24 00:48:06.635747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147377 ] 00:34:13.098 [2024-04-24 00:48:06.805508] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.357 [2024-04-24 00:48:07.037660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.357 [2024-04-24 00:48:07.037786] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:34:13.358 [2024-04-24 00:48:07.037832] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:13.358 [2024-04-24 00:48:07.037860] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:13.960 ************************************ 00:34:13.960 END TEST bdev_json_nonenclosed 00:34:13.960 ************************************ 00:34:13.960 00:34:13.960 real 0m1.010s 00:34:13.960 user 0m0.727s 00:34:13.960 sys 0m0.182s 00:34:13.960 00:48:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:13.960 00:48:07 -- common/autotest_common.sh@10 -- # set +x 00:34:13.960 00:48:07 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:13.960 00:48:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:34:13.960 00:48:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:13.960 00:48:07 -- common/autotest_common.sh@10 -- # set +x 00:34:13.960 ************************************ 00:34:13.960 START TEST bdev_json_nonarray 00:34:13.960 ************************************ 00:34:13.960 00:48:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:13.960 [2024-04-24 00:48:07.720345] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:34:13.960 [2024-04-24 00:48:07.720828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147413 ] 00:34:14.218 [2024-04-24 00:48:07.900877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.475 [2024-04-24 00:48:08.140791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.475 [2024-04-24 00:48:08.141206] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
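Both negative tests hand bdevperf a deliberately malformed --json file: nonenclosed.json trips the 'not enclosed in {}' check and nonarray.json trips the ''subsystems' should be an array' check seen just above. For contrast, a well-formed file, sketched here from the shape those two errors imply plus the attach parameters used throughout this run, wraps everything in a single object whose subsystems member is an array:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
          }
        ]
      }
    ]
  }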
00:34:14.475 [2024-04-24 00:48:08.141403] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:14.475 [2024-04-24 00:48:08.141587] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:15.040 ************************************ 00:34:15.040 END TEST bdev_json_nonarray 00:34:15.040 ************************************ 00:34:15.040 00:34:15.040 real 0m1.013s 00:34:15.040 user 0m0.736s 00:34:15.040 sys 0m0.175s 00:34:15.040 00:48:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:15.040 00:48:08 -- common/autotest_common.sh@10 -- # set +x 00:34:15.041 00:48:08 -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:34:15.041 00:48:08 -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:34:15.041 00:48:08 -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:34:15.041 00:48:08 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:34:15.041 00:48:08 -- bdev/blockdev.sh@811 -- # cleanup 00:34:15.041 00:48:08 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:15.041 00:48:08 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:15.041 00:48:08 -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:34:15.041 00:48:08 -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:34:15.041 00:48:08 -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:34:15.041 00:48:08 -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:34:15.041 ************************************ 00:34:15.041 END TEST blockdev_nvme 00:34:15.041 ************************************ 00:34:15.041 00:34:15.041 real 0m38.715s 00:34:15.041 user 0m55.950s 00:34:15.041 sys 0m4.528s 00:34:15.041 00:48:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:15.041 00:48:08 -- common/autotest_common.sh@10 -- # set +x 00:34:15.041 00:48:08 -- spdk/autotest.sh@209 -- # uname -s 00:34:15.041 00:48:08 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:34:15.041 00:48:08 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:34:15.041 00:48:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:34:15.041 00:48:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:15.041 00:48:08 -- common/autotest_common.sh@10 -- # set +x 00:34:15.041 ************************************ 00:34:15.041 START TEST blockdev_nvme_gpt 00:34:15.041 ************************************ 00:34:15.041 00:48:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:34:15.299 * Looking for test storage... 
00:34:15.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:34:15.299 00:48:08 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:34:15.299 00:48:08 -- bdev/nbd_common.sh@6 -- # set -e 00:34:15.299 00:48:08 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:34:15.299 00:48:08 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:15.299 00:48:08 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:34:15.299 00:48:08 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:34:15.299 00:48:08 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:34:15.299 00:48:08 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:34:15.299 00:48:08 -- bdev/blockdev.sh@20 -- # : 00:34:15.299 00:48:08 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:34:15.299 00:48:08 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:34:15.299 00:48:08 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:34:15.299 00:48:08 -- bdev/blockdev.sh@674 -- # uname -s 00:34:15.299 00:48:08 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:34:15.299 00:48:08 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:34:15.299 00:48:08 -- bdev/blockdev.sh@682 -- # test_type=gpt 00:34:15.299 00:48:08 -- bdev/blockdev.sh@683 -- # crypto_device= 00:34:15.299 00:48:08 -- bdev/blockdev.sh@684 -- # dek= 00:34:15.299 00:48:08 -- bdev/blockdev.sh@685 -- # env_ctx= 00:34:15.299 00:48:08 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:34:15.299 00:48:08 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:34:15.299 00:48:08 -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:34:15.299 00:48:08 -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:34:15.299 00:48:08 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:34:15.299 00:48:08 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=147512 00:34:15.299 00:48:08 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:15.299 00:48:08 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:34:15.299 00:48:08 -- bdev/blockdev.sh@49 -- # waitforlisten 147512 00:34:15.299 00:48:08 -- common/autotest_common.sh@817 -- # '[' -z 147512 ']' 00:34:15.299 00:48:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.299 00:48:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:15.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.299 00:48:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.299 00:48:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:15.299 00:48:08 -- common/autotest_common.sh@10 -- # set +x 00:34:15.299 [2024-04-24 00:48:08.993368] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:34:15.299 [2024-04-24 00:48:08.993842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147512 ] 00:34:15.557 [2024-04-24 00:48:09.157024] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.814 [2024-04-24 00:48:09.395060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.748 00:48:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:16.748 00:48:10 -- common/autotest_common.sh@850 -- # return 0 00:34:16.748 00:48:10 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:34:16.748 00:48:10 -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:34:16.748 00:48:10 -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:17.006 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:34:17.006 Waiting for block devices as requested 00:34:17.264 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:17.264 00:48:10 -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:34:17.264 00:48:10 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:34:17.264 00:48:10 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:34:17.264 00:48:10 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:34:17.264 00:48:10 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:34:17.264 00:48:10 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:34:17.264 00:48:10 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:17.264 00:48:10 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:17.264 00:48:10 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:17.264 00:48:10 -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme0/nvme0n1') 00:34:17.264 00:48:10 -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:34:17.264 00:48:10 -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:34:17.264 00:48:10 -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:34:17.264 00:48:10 -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:34:17.264 00:48:10 -- bdev/blockdev.sh@112 -- # dev=/dev/nvme0n1 00:34:17.264 00:48:10 -- bdev/blockdev.sh@113 -- # parted /dev/nvme0n1 -ms print 00:34:17.264 00:48:10 -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:34:17.264 BYT; 00:34:17.264 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:34:17.264 00:48:10 -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:34:17.264 BYT; 00:34:17.264 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:34:17.264 00:48:10 -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme0n1 00:34:17.264 00:48:10 -- bdev/blockdev.sh@116 -- # break 00:34:17.264 00:48:10 -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme0n1 ]] 00:34:17.264 00:48:10 -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:34:17.264 00:48:10 -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:34:17.264 00:48:10 -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:34:17.831 00:48:11 -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:34:17.831 00:48:11 -- 
scripts/common.sh@408 -- # local spdk_guid 00:34:17.831 00:48:11 -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:34:17.831 00:48:11 -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:34:17.831 00:48:11 -- scripts/common.sh@413 -- # IFS='()' 00:34:17.831 00:48:11 -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:34:17.831 00:48:11 -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:34:17.831 00:48:11 -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:34:17.831 00:48:11 -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:34:17.831 00:48:11 -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:34:17.831 00:48:11 -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:34:17.831 00:48:11 -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:34:17.831 00:48:11 -- scripts/common.sh@420 -- # local spdk_guid 00:34:17.831 00:48:11 -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:34:17.831 00:48:11 -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:34:17.831 00:48:11 -- scripts/common.sh@425 -- # IFS='()' 00:34:17.831 00:48:11 -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:34:17.831 00:48:11 -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:34:17.831 00:48:11 -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:34:17.831 00:48:11 -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:34:17.831 00:48:11 -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:34:17.831 00:48:11 -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:34:17.831 00:48:11 -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:34:18.767 The operation has completed successfully. 00:34:18.767 00:48:12 -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:34:19.702 The operation has completed successfully. 
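The two sgdisk calls above retag the parted-created partitions with SPDK's GPT partition type GUIDs pulled out of module/bdev/gpt/gpt.h: partition 1 gets the current SPDK_GPT_PART_TYPE_GUID (6527994e-2c5a-4eec-9613-8f5944074e8b) and partition 2 the legacy SPDK_GPT_PART_TYPE_GUID_OLD (7c5222bd-8f5d-4087-9c00-bf9843c7b58c), which is what lets the gpt vbdev module expose them as Nvme0n1p1 and Nvme0n1p2 once the controller is re-attached below. A hand check of the retagging, not performed in this run but assuming sgdisk is available as it is on this host, would be:

  sgdisk -i 1 /dev/nvme0n1   # partition type GUID should read 6527994E-2C5A-4EEC-9613-8F5944074E8B
  sgdisk -i 2 /dev/nvme0n1   # partition type GUID should read 7C5222BD-8F5D-4087-9C00-BF9843C7B58C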
00:34:19.702 00:48:13 -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:20.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:34:20.268 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:34:21.204 00:48:14 -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:34:21.204 00:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.204 00:48:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.204 [] 00:34:21.204 00:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.204 00:48:14 -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:34:21.204 00:48:14 -- bdev/blockdev.sh@81 -- # local json 00:34:21.204 00:48:14 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:34:21.204 00:48:14 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:34:21.204 00:48:14 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:34:21.204 00:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.204 00:48:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.204 00:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.204 00:48:14 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:34:21.204 00:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.204 00:48:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.204 00:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.204 00:48:14 -- bdev/blockdev.sh@740 -- # cat 00:34:21.204 00:48:14 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:34:21.204 00:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.204 00:48:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.204 00:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.204 00:48:14 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:34:21.204 00:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.204 00:48:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.204 00:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.204 00:48:14 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:34:21.204 00:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.204 00:48:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.204 00:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.204 00:48:14 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:34:21.204 00:48:14 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:34:21.204 00:48:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:34:21.204 00:48:14 -- common/autotest_common.sh@10 -- # set +x 00:34:21.204 00:48:14 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:34:21.204 00:48:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:34:21.204 00:48:14 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:34:21.204 00:48:14 -- bdev/blockdev.sh@749 -- # jq -r .name 00:34:21.204 00:48:14 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:34:21.204 00:48:14 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:34:21.204 00:48:14 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:34:21.204 00:48:14 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:34:21.204 00:48:14 -- bdev/blockdev.sh@754 -- # killprocess 147512 00:34:21.204 00:48:14 -- common/autotest_common.sh@936 -- # '[' -z 147512 ']' 00:34:21.204 00:48:14 -- common/autotest_common.sh@940 -- # kill -0 147512 00:34:21.204 00:48:14 -- common/autotest_common.sh@941 -- # uname 00:34:21.204 00:48:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:21.204 00:48:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147512 00:34:21.204 00:48:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:21.204 00:48:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:21.204 00:48:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 147512' 00:34:21.204 killing process with pid 147512 00:34:21.204 00:48:14 -- common/autotest_common.sh@955 -- # kill 147512 00:34:21.204 00:48:14 -- common/autotest_common.sh@960 -- # wait 147512 00:34:24.488 00:48:17 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:24.488 00:48:17 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:34:24.488 00:48:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:34:24.488 00:48:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:24.488 00:48:17 -- common/autotest_common.sh@10 -- # set +x 00:34:24.488 ************************************ 00:34:24.488 START TEST bdev_hello_world 00:34:24.488 ************************************ 00:34:24.488 00:48:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:34:24.488 [2024-04-24 00:48:17.796205] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:34:24.488 [2024-04-24 00:48:17.796362] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147967 ] 00:34:24.488 [2024-04-24 00:48:17.966886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.488 [2024-04-24 00:48:18.275733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.054 [2024-04-24 00:48:18.847084] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:34:25.054 [2024-04-24 00:48:18.847172] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:34:25.054 [2024-04-24 00:48:18.847222] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:34:25.311 [2024-04-24 00:48:18.850800] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:34:25.311 [2024-04-24 00:48:18.851399] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:34:25.311 [2024-04-24 00:48:18.851465] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:34:25.311 [2024-04-24 00:48:18.851753] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:34:25.311 00:34:25.311 [2024-04-24 00:48:18.851805] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:34:26.686 00:34:26.686 real 0m2.690s 00:34:26.686 user 0m2.350s 00:34:26.686 sys 0m0.241s 00:34:26.686 00:48:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:26.686 00:48:20 -- common/autotest_common.sh@10 -- # set +x 00:34:26.686 ************************************ 00:34:26.686 END TEST bdev_hello_world 00:34:26.686 ************************************ 00:34:26.686 00:48:20 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:34:26.686 00:48:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:34:26.686 00:48:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:26.686 00:48:20 -- common/autotest_common.sh@10 -- # set +x 00:34:26.945 ************************************ 00:34:26.945 START TEST bdev_bounds 00:34:26.945 ************************************ 00:34:26.945 00:48:20 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:34:26.945 00:48:20 -- bdev/blockdev.sh@290 -- # bdevio_pid=148030 00:34:26.945 00:48:20 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:26.945 00:48:20 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:34:26.945 Process bdevio pid: 148030 00:34:26.945 00:48:20 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 148030' 00:34:26.945 00:48:20 -- bdev/blockdev.sh@293 -- # waitforlisten 148030 00:34:26.945 00:48:20 -- common/autotest_common.sh@817 -- # '[' -z 148030 ']' 00:34:26.945 00:48:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.945 00:48:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:26.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.945 00:48:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:26.945 00:48:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:26.945 00:48:20 -- common/autotest_common.sh@10 -- # set +x 00:34:26.945 [2024-04-24 00:48:20.583416] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:34:26.945 [2024-04-24 00:48:20.583596] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148030 ] 00:34:27.202 [2024-04-24 00:48:20.757419] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:27.202 [2024-04-24 00:48:20.989859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:27.202 [2024-04-24 00:48:20.990003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.202 [2024-04-24 00:48:20.990012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:27.768 00:48:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:27.768 00:48:21 -- common/autotest_common.sh@850 -- # return 0 00:34:27.768 00:48:21 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:34:28.026 I/O targets: 00:34:28.026 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:34:28.026 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:34:28.026 00:34:28.026 00:34:28.026 CUnit - A unit testing framework for C - Version 2.1-3 00:34:28.026 http://cunit.sourceforge.net/ 00:34:28.026 00:34:28.026 00:34:28.026 Suite: bdevio tests on: Nvme0n1p2 00:34:28.026 Test: blockdev write read block ...passed 00:34:28.026 Test: blockdev write zeroes read block ...passed 00:34:28.026 Test: blockdev write zeroes read no split ...passed 00:34:28.026 Test: blockdev write zeroes read split ...passed 00:34:28.026 Test: blockdev write zeroes read split partial ...passed 00:34:28.026 Test: blockdev reset ...[2024-04-24 00:48:21.757611] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:34:28.026 [2024-04-24 00:48:21.761757] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:28.026 passed 00:34:28.026 Test: blockdev write read 8 blocks ...passed 00:34:28.026 Test: blockdev write read size > 128k ...passed 00:34:28.026 Test: blockdev write read invalid size ...passed 00:34:28.026 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:28.026 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:28.026 Test: blockdev write read max offset ...passed 00:34:28.026 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:28.026 Test: blockdev writev readv 8 blocks ...passed 00:34:28.026 Test: blockdev writev readv 30 x 1block ...passed 00:34:28.026 Test: blockdev writev readv block ...passed 00:34:28.026 Test: blockdev writev readv size > 128k ...passed 00:34:28.026 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:28.026 Test: blockdev comparev and writev ...[2024-04-24 00:48:21.770385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ac0b000 len:0x1000 00:34:28.026 [2024-04-24 00:48:21.770495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:28.026 passed 00:34:28.026 Test: blockdev nvme passthru rw ...passed 00:34:28.026 Test: blockdev nvme passthru vendor specific ...passed 00:34:28.026 Test: blockdev nvme admin passthru ...passed 00:34:28.026 Test: blockdev copy ...passed 00:34:28.026 Suite: bdevio tests on: Nvme0n1p1 00:34:28.026 Test: blockdev write read block ...passed 00:34:28.026 Test: blockdev write zeroes read block ...passed 00:34:28.026 Test: blockdev write zeroes read no split ...passed 00:34:28.026 Test: blockdev write zeroes read split ...passed 00:34:28.284 Test: blockdev write zeroes read split partial ...passed 00:34:28.284 Test: blockdev reset ...[2024-04-24 00:48:21.847228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:34:28.284 [2024-04-24 00:48:21.851394] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:28.284 passed 00:34:28.284 Test: blockdev write read 8 blocks ...passed 00:34:28.284 Test: blockdev write read size > 128k ...passed 00:34:28.284 Test: blockdev write read invalid size ...passed 00:34:28.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:28.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:28.284 Test: blockdev write read max offset ...passed 00:34:28.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:28.284 Test: blockdev writev readv 8 blocks ...passed 00:34:28.284 Test: blockdev writev readv 30 x 1block ...passed 00:34:28.284 Test: blockdev writev readv block ...passed 00:34:28.284 Test: blockdev writev readv size > 128k ...passed 00:34:28.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:28.284 Test: blockdev comparev and writev ...[2024-04-24 00:48:21.859422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ac0d000 len:0x1000 00:34:28.284 [2024-04-24 00:48:21.859524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:34:28.284 passed 00:34:28.284 Test: blockdev nvme passthru rw ...passed 00:34:28.284 Test: blockdev nvme passthru vendor specific ...passed 00:34:28.284 Test: blockdev nvme admin passthru ...passed 00:34:28.284 Test: blockdev copy ...passed 00:34:28.284 00:34:28.284 Run Summary: Type Total Ran Passed Failed Inactive 00:34:28.284 suites 2 2 n/a 0 0 00:34:28.284 tests 46 46 46 0 0 00:34:28.284 asserts 284 284 284 0 n/a 00:34:28.284 00:34:28.284 Elapsed time = 0.503 seconds 00:34:28.284 0 00:34:28.284 00:48:21 -- bdev/blockdev.sh@295 -- # killprocess 148030 00:34:28.284 00:48:21 -- common/autotest_common.sh@936 -- # '[' -z 148030 ']' 00:34:28.284 00:48:21 -- common/autotest_common.sh@940 -- # kill -0 148030 00:34:28.284 00:48:21 -- common/autotest_common.sh@941 -- # uname 00:34:28.284 00:48:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:28.284 00:48:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148030 00:34:28.284 00:48:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:28.284 killing process with pid 148030 00:34:28.284 00:48:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:28.284 00:48:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148030' 00:34:28.284 00:48:21 -- common/autotest_common.sh@955 -- # kill 148030 00:34:28.284 00:48:21 -- common/autotest_common.sh@960 -- # wait 148030 00:34:29.663 00:48:23 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:34:29.663 00:34:29.663 real 0m2.742s 00:34:29.663 user 0m6.532s 00:34:29.663 sys 0m0.359s 00:34:29.663 00:48:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:29.663 00:48:23 -- common/autotest_common.sh@10 -- # set +x 00:34:29.663 ************************************ 00:34:29.663 END TEST bdev_bounds 00:34:29.663 ************************************ 00:34:29.663 00:48:23 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:34:29.663 00:48:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:34:29.663 00:48:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:29.663 00:48:23 -- common/autotest_common.sh@10 -- # set +x 00:34:29.663 ************************************ 00:34:29.663 START TEST bdev_nbd 
00:34:29.663 ************************************ 00:34:29.663 00:48:23 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:34:29.663 00:48:23 -- bdev/blockdev.sh@300 -- # uname -s 00:34:29.663 00:48:23 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:34:29.663 00:48:23 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:29.663 00:48:23 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:29.663 00:48:23 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:34:29.663 00:48:23 -- bdev/blockdev.sh@304 -- # local bdev_all 00:34:29.663 00:48:23 -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:34:29.663 00:48:23 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:34:29.663 00:48:23 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:34:29.663 00:48:23 -- bdev/blockdev.sh@311 -- # local nbd_all 00:34:29.663 00:48:23 -- bdev/blockdev.sh@312 -- # bdev_num=2 00:34:29.663 00:48:23 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:29.663 00:48:23 -- bdev/blockdev.sh@314 -- # local nbd_list 00:34:29.663 00:48:23 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:34:29.663 00:48:23 -- bdev/blockdev.sh@315 -- # local bdev_list 00:34:29.663 00:48:23 -- bdev/blockdev.sh@318 -- # nbd_pid=148103 00:34:29.663 00:48:23 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:34:29.663 00:48:23 -- bdev/blockdev.sh@320 -- # waitforlisten 148103 /var/tmp/spdk-nbd.sock 00:34:29.664 00:48:23 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:29.664 00:48:23 -- common/autotest_common.sh@817 -- # '[' -z 148103 ']' 00:34:29.664 00:48:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:34:29.664 00:48:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:29.664 00:48:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:34:29.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:34:29.664 00:48:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:29.664 00:48:23 -- common/autotest_common.sh@10 -- # set +x 00:34:29.664 [2024-04-24 00:48:23.437220] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:34:29.664 [2024-04-24 00:48:23.437370] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.923 [2024-04-24 00:48:23.606434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.181 [2024-04-24 00:48:23.826292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.747 00:48:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:30.747 00:48:24 -- common/autotest_common.sh@850 -- # return 0 00:34:30.747 00:48:24 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:34:30.747 00:48:24 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:30.747 00:48:24 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:34:30.747 00:48:24 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:34:30.747 00:48:24 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:34:30.747 00:48:24 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:30.747 00:48:24 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:34:30.747 00:48:24 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:34:30.747 00:48:24 -- bdev/nbd_common.sh@24 -- # local i 00:34:30.747 00:48:24 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:34:30.747 00:48:24 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:34:30.747 00:48:24 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:34:30.747 00:48:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:34:31.033 00:48:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:34:31.033 00:48:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:34:31.033 00:48:24 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:34:31.033 00:48:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:34:31.033 00:48:24 -- common/autotest_common.sh@855 -- # local i 00:34:31.033 00:48:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:31.033 00:48:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:31.033 00:48:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:34:31.033 00:48:24 -- common/autotest_common.sh@859 -- # break 00:34:31.033 00:48:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:31.033 00:48:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:31.033 00:48:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:31.033 1+0 records in 00:34:31.033 1+0 records out 00:34:31.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548545 s, 7.5 MB/s 00:34:31.033 00:48:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:31.033 00:48:24 -- common/autotest_common.sh@872 -- # size=4096 00:34:31.033 00:48:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:31.033 00:48:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:31.033 00:48:24 -- common/autotest_common.sh@875 -- # return 0 00:34:31.033 00:48:24 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:31.033 00:48:24 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:34:31.033 00:48:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:34:31.292 00:48:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:34:31.292 00:48:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:34:31.292 00:48:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:34:31.292 00:48:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:34:31.292 00:48:25 -- common/autotest_common.sh@855 -- # local i 00:34:31.292 00:48:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:31.292 00:48:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:31.292 00:48:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:34:31.292 00:48:25 -- common/autotest_common.sh@859 -- # break 00:34:31.292 00:48:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:31.292 00:48:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:31.292 00:48:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:31.292 1+0 records in 00:34:31.292 1+0 records out 00:34:31.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000749352 s, 5.5 MB/s 00:34:31.292 00:48:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:31.292 00:48:25 -- common/autotest_common.sh@872 -- # size=4096 00:34:31.292 00:48:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:31.292 00:48:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:31.292 00:48:25 -- common/autotest_common.sh@875 -- # return 0 00:34:31.292 00:48:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:31.292 00:48:25 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:34:31.292 00:48:25 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:31.551 00:48:25 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:34:31.551 { 00:34:31.551 "nbd_device": "/dev/nbd0", 00:34:31.551 "bdev_name": "Nvme0n1p1" 00:34:31.551 }, 00:34:31.551 { 00:34:31.551 "nbd_device": "/dev/nbd1", 00:34:31.551 "bdev_name": "Nvme0n1p2" 00:34:31.551 } 00:34:31.551 ]' 00:34:31.551 00:48:25 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:34:31.551 00:48:25 -- bdev/nbd_common.sh@119 -- # echo '[ 00:34:31.551 { 00:34:31.551 "nbd_device": "/dev/nbd0", 00:34:31.551 "bdev_name": "Nvme0n1p1" 00:34:31.551 }, 00:34:31.551 { 00:34:31.551 "nbd_device": "/dev/nbd1", 00:34:31.551 "bdev_name": "Nvme0n1p2" 00:34:31.551 } 00:34:31.551 ]' 00:34:31.551 00:48:25 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:34:31.551 00:48:25 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:34:31.551 00:48:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:31.551 00:48:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:31.551 00:48:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:31.551 00:48:25 -- bdev/nbd_common.sh@51 -- # local i 00:34:31.551 00:48:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:31.551 00:48:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:31.810 00:48:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:31.810 00:48:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:31.810 00:48:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:31.810 00:48:25 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:31.810 00:48:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:31.810 00:48:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:31.810 00:48:25 -- bdev/nbd_common.sh@41 -- # break 00:34:31.810 00:48:25 -- bdev/nbd_common.sh@45 -- # return 0 00:34:31.810 00:48:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:31.810 00:48:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:32.067 00:48:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:32.067 00:48:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:32.067 00:48:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:32.067 00:48:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:32.067 00:48:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:32.067 00:48:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:32.067 00:48:25 -- bdev/nbd_common.sh@41 -- # break 00:34:32.067 00:48:25 -- bdev/nbd_common.sh@45 -- # return 0 00:34:32.067 00:48:25 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:32.067 00:48:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:32.067 00:48:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:32.325 00:48:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:32.325 00:48:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:32.325 00:48:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@65 -- # true 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@65 -- # count=0 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@122 -- # count=0 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@127 -- # return 0 00:34:32.584 00:48:26 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@12 -- # local i 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:32.584 00:48:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:34:32.846 /dev/nbd0 00:34:32.847 00:48:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:32.847 00:48:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:32.847 00:48:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:34:32.847 00:48:26 -- common/autotest_common.sh@855 -- # local i 00:34:32.847 00:48:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:32.847 00:48:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:32.847 00:48:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:34:32.847 00:48:26 -- common/autotest_common.sh@859 -- # break 00:34:32.847 00:48:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:32.847 00:48:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:32.847 00:48:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:32.847 1+0 records in 00:34:32.847 1+0 records out 00:34:32.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382057 s, 10.7 MB/s 00:34:32.847 00:48:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:32.847 00:48:26 -- common/autotest_common.sh@872 -- # size=4096 00:34:32.847 00:48:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:32.847 00:48:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:32.847 00:48:26 -- common/autotest_common.sh@875 -- # return 0 00:34:32.847 00:48:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:32.847 00:48:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:32.847 00:48:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:34:33.121 /dev/nbd1 00:34:33.121 00:48:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:33.121 00:48:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:33.121 00:48:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:34:33.121 00:48:26 -- common/autotest_common.sh@855 -- # local i 00:34:33.121 00:48:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:34:33.121 00:48:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:34:33.121 00:48:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:34:33.121 00:48:26 -- common/autotest_common.sh@859 -- # break 00:34:33.121 00:48:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:34:33.121 00:48:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:34:33.121 00:48:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:33.122 1+0 records in 00:34:33.122 1+0 records out 00:34:33.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618195 s, 6.6 MB/s 00:34:33.122 00:48:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:33.122 00:48:26 -- common/autotest_common.sh@872 -- # size=4096 00:34:33.122 00:48:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:33.122 00:48:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:34:33.122 00:48:26 -- common/autotest_common.sh@875 -- # return 0 00:34:33.122 00:48:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:33.122 00:48:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:33.122 00:48:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
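The nbd_common.sh calls traced in this stretch export each GPT bdev as a kernel /dev/nbdX node over the bdev_svc RPC socket and then push data through it with dd. A minimal sketch of the same round trip, assuming a target is already listening on /var/tmp/spdk-nbd.sock and using an arbitrary scratch file path in place of the test's nbdrandtest file:

  RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  $RPC nbd_start_disk Nvme0n1p1 /dev/nbd0          # expose the bdev as a block device
  $RPC nbd_get_disks                               # JSON list of bdev <-> nbd mappings

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0          # read back through nbd and compare

  $RPC nbd_stop_disk /dev/nbd0                     # tear the mapping down again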
00:34:33.122 00:48:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:33.122 00:48:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:33.379 00:48:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:33.380 { 00:34:33.380 "nbd_device": "/dev/nbd0", 00:34:33.380 "bdev_name": "Nvme0n1p1" 00:34:33.380 }, 00:34:33.380 { 00:34:33.380 "nbd_device": "/dev/nbd1", 00:34:33.380 "bdev_name": "Nvme0n1p2" 00:34:33.380 } 00:34:33.380 ]' 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:33.380 { 00:34:33.380 "nbd_device": "/dev/nbd0", 00:34:33.380 "bdev_name": "Nvme0n1p1" 00:34:33.380 }, 00:34:33.380 { 00:34:33.380 "nbd_device": "/dev/nbd1", 00:34:33.380 "bdev_name": "Nvme0n1p2" 00:34:33.380 } 00:34:33.380 ]' 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:34:33.380 /dev/nbd1' 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:34:33.380 /dev/nbd1' 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@65 -- # count=2 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@95 -- # count=2 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:34:33.380 00:48:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:34:33.638 256+0 records in 00:34:33.638 256+0 records out 00:34:33.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00990711 s, 106 MB/s 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:34:33.638 256+0 records in 00:34:33.638 256+0 records out 00:34:33.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0699018 s, 15.0 MB/s 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:34:33.638 256+0 records in 00:34:33.638 256+0 records out 00:34:33.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0753812 s, 13.9 MB/s 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
00:34:33.638 00:48:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@51 -- # local i 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:33.638 00:48:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:33.896 00:48:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:33.896 00:48:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:33.896 00:48:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:33.896 00:48:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:33.896 00:48:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:33.896 00:48:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:33.896 00:48:27 -- bdev/nbd_common.sh@41 -- # break 00:34:33.896 00:48:27 -- bdev/nbd_common.sh@45 -- # return 0 00:34:33.896 00:48:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:33.896 00:48:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:34.154 00:48:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:34.412 00:48:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:34.412 00:48:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:34.412 00:48:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:34.412 00:48:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:34.412 00:48:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:34.412 00:48:27 -- bdev/nbd_common.sh@41 -- # break 00:34:34.412 00:48:27 -- bdev/nbd_common.sh@45 -- # return 0 00:34:34.412 00:48:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:34.412 00:48:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:34.412 00:48:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@65 -- # true 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@65 -- # count=0 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@104 -- # count=0 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:34:34.671 00:48:28 -- 
bdev/nbd_common.sh@109 -- # return 0 00:34:34.671 00:48:28 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:34:34.671 00:48:28 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:34:34.929 malloc_lvol_verify 00:34:34.929 00:48:28 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:34:35.495 8f4c747b-bf55-4418-bae1-c4e9127b2d85 00:34:35.495 00:48:29 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:34:35.495 61522cbb-5526-4904-bc0a-b7c5629d7e27 00:34:35.495 00:48:29 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:34:36.061 /dev/nbd0 00:34:36.061 00:48:29 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:34:36.061 mke2fs 1.46.5 (30-Dec-2021) 00:34:36.061 00:34:36.061 Filesystem too small for a journal 00:34:36.061 Discarding device blocks: 0/1024 done 00:34:36.061 Creating filesystem with 1024 4k blocks and 1024 inodes 00:34:36.061 00:34:36.061 Allocating group tables: 0/1 done 00:34:36.061 Writing inode tables: 0/1 done 00:34:36.061 Writing superblocks and filesystem accounting information: 0/1 done 00:34:36.061 00:34:36.061 00:48:29 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:34:36.061 00:48:29 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:36.061 00:48:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:36.061 00:48:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:36.061 00:48:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:36.061 00:48:29 -- bdev/nbd_common.sh@51 -- # local i 00:34:36.061 00:48:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:36.061 00:48:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:36.319 00:48:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:36.319 00:48:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:36.319 00:48:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:36.319 00:48:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:36.319 00:48:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:36.319 00:48:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:36.319 00:48:29 -- bdev/nbd_common.sh@41 -- # break 00:34:36.319 00:48:29 -- bdev/nbd_common.sh@45 -- # return 0 00:34:36.319 00:48:29 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:34:36.319 00:48:29 -- bdev/nbd_common.sh@147 -- # return 0 00:34:36.319 00:48:29 -- bdev/blockdev.sh@326 -- # killprocess 148103 00:34:36.319 00:48:29 -- common/autotest_common.sh@936 -- # '[' -z 148103 ']' 00:34:36.319 00:48:29 -- common/autotest_common.sh@940 -- # kill -0 148103 00:34:36.319 00:48:29 -- common/autotest_common.sh@941 -- # uname 00:34:36.319 00:48:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:36.319 00:48:29 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148103 00:34:36.319 00:48:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:36.319 00:48:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:36.319 00:48:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148103' 00:34:36.319 killing process with pid 148103 00:34:36.319 00:48:29 -- common/autotest_common.sh@955 -- # kill 148103 00:34:36.319 00:48:29 -- common/autotest_common.sh@960 -- # wait 148103 00:34:37.696 ************************************ 00:34:37.697 END TEST bdev_nbd 00:34:37.697 ************************************ 00:34:37.697 00:48:31 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:34:37.697 00:34:37.697 real 0m7.994s 00:34:37.697 user 0m11.388s 00:34:37.697 sys 0m2.095s 00:34:37.697 00:48:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:37.697 00:48:31 -- common/autotest_common.sh@10 -- # set +x 00:34:37.697 00:48:31 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:34:37.697 00:48:31 -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:34:37.697 00:48:31 -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:34:37.697 skipping fio tests on NVMe due to multi-ns failures. 00:34:37.697 00:48:31 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:34:37.697 00:48:31 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:37.697 00:48:31 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:37.697 00:48:31 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:34:37.697 00:48:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:37.697 00:48:31 -- common/autotest_common.sh@10 -- # set +x 00:34:37.697 ************************************ 00:34:37.697 START TEST bdev_verify 00:34:37.697 ************************************ 00:34:37.697 00:48:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:37.955 [2024-04-24 00:48:31.528978] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:34:37.955 [2024-04-24 00:48:31.529160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148374 ] 00:34:37.955 [2024-04-24 00:48:31.698967] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:38.213 [2024-04-24 00:48:31.980295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.213 [2024-04-24 00:48:31.980296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.779 Running I/O for 5 seconds... 
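bdev_verify feeds bdevperf the JSON config generated earlier by gen_nvme.sh (written to test/bdev/bdev.json) and runs a verify workload against both GPT partitions. A sketch of what that config amounts to, assuming the usual top-level "subsystems" wrapper around the bdev subsystem config shown near the start of this run:

  { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } } ] } ] }

with bdevperf then launched from the repo root at 128 outstanding I/Os, 4 KiB blocks, a 5 second verify pass on cores 0-1:

  build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3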
00:34:44.096 00:34:44.096 Latency(us) 00:34:44.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.096 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:44.096 Verification LBA range: start 0x0 length 0x4ff80 00:34:44.096 Nvme0n1p1 : 5.01 4237.94 16.55 0.00 0.00 30089.11 4712.35 32705.58 00:34:44.096 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:44.096 Verification LBA range: start 0x4ff80 length 0x4ff80 00:34:44.096 Nvme0n1p1 : 5.02 4361.54 17.04 0.00 0.00 29247.62 3713.71 33953.89 00:34:44.096 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:44.096 Verification LBA range: start 0x0 length 0x4ff7f 00:34:44.096 Nvme0n1p2 : 5.03 4252.69 16.61 0.00 0.00 29916.17 1763.23 30333.81 00:34:44.096 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:44.096 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:34:44.096 Nvme0n1p2 : 5.02 4359.67 17.03 0.00 0.00 29184.09 3994.58 30333.81 00:34:44.096 =================================================================================================================== 00:34:44.096 Total : 17211.85 67.23 0.00 0.00 29603.87 1763.23 33953.89 00:34:45.472 00:34:45.472 real 0m7.790s 00:34:45.472 user 0m14.201s 00:34:45.472 sys 0m0.265s 00:34:45.472 00:48:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:45.472 ************************************ 00:34:45.472 END TEST bdev_verify 00:34:45.472 ************************************ 00:34:45.472 00:48:39 -- common/autotest_common.sh@10 -- # set +x 00:34:45.730 00:48:39 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:45.730 00:48:39 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:34:45.730 00:48:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:45.730 00:48:39 -- common/autotest_common.sh@10 -- # set +x 00:34:45.730 ************************************ 00:34:45.730 START TEST bdev_verify_big_io 00:34:45.730 ************************************ 00:34:45.730 00:48:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:45.730 [2024-04-24 00:48:39.441763] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:34:45.730 [2024-04-24 00:48:39.441997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148485 ] 00:34:45.987 [2024-04-24 00:48:39.637977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:46.246 [2024-04-24 00:48:39.921159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.246 [2024-04-24 00:48:39.921156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.830 Running I/O for 5 seconds... 
00:34:52.127 00:34:52.128 Latency(us) 00:34:52.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:52.128 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:52.128 Verification LBA range: start 0x0 length 0x4ff8 00:34:52.128 Nvme0n1p1 : 5.16 421.82 26.36 0.00 0.00 295463.05 6147.90 359511.77 00:34:52.128 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:52.128 Verification LBA range: start 0x4ff8 length 0x4ff8 00:34:52.128 Nvme0n1p1 : 5.18 419.88 26.24 0.00 0.00 297030.60 6865.68 361509.06 00:34:52.128 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:52.128 Verification LBA range: start 0x0 length 0x4ff7 00:34:52.128 Nvme0n1p2 : 5.23 440.70 27.54 0.00 0.00 274496.20 1646.20 355517.20 00:34:52.128 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:52.128 Verification LBA range: start 0x4ff7 length 0x4ff7 00:34:52.128 Nvme0n1p2 : 5.24 436.74 27.30 0.00 0.00 277145.21 1279.51 365503.63 00:34:52.128 =================================================================================================================== 00:34:52.128 Total : 1719.14 107.45 0.00 0.00 285756.38 1279.51 365503.63 00:34:54.045 00:34:54.045 real 0m7.969s 00:34:54.045 user 0m14.532s 00:34:54.045 sys 0m0.286s 00:34:54.045 00:48:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:54.045 ************************************ 00:34:54.045 END TEST bdev_verify_big_io 00:34:54.045 ************************************ 00:34:54.045 00:48:47 -- common/autotest_common.sh@10 -- # set +x 00:34:54.045 00:48:47 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:54.045 00:48:47 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:34:54.045 00:48:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:54.045 00:48:47 -- common/autotest_common.sh@10 -- # set +x 00:34:54.045 ************************************ 00:34:54.045 START TEST bdev_write_zeroes 00:34:54.045 ************************************ 00:34:54.045 00:48:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:54.045 [2024-04-24 00:48:47.506821] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:34:54.045 [2024-04-24 00:48:47.507106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148604 ] 00:34:54.045 [2024-04-24 00:48:47.688033] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.304 [2024-04-24 00:48:47.894205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.562 Running I/O for 1 seconds... 
00:34:55.967 00:34:55.967 Latency(us) 00:34:55.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.967 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:55.967 Nvme0n1p1 : 1.00 28675.35 112.01 0.00 0.00 4455.09 2371.78 11484.40 00:34:55.967 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:55.967 Nvme0n1p2 : 1.01 28636.57 111.86 0.00 0.00 4455.11 2761.87 11671.65 00:34:55.967 =================================================================================================================== 00:34:55.967 Total : 57311.92 223.87 0.00 0.00 4455.10 2371.78 11671.65 00:34:56.918 00:34:56.918 real 0m3.262s 00:34:56.918 user 0m2.929s 00:34:56.918 sys 0m0.233s 00:34:56.918 ************************************ 00:34:56.918 END TEST bdev_write_zeroes 00:34:56.918 ************************************ 00:34:56.918 00:48:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:56.918 00:48:50 -- common/autotest_common.sh@10 -- # set +x 00:34:57.175 00:48:50 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:57.175 00:48:50 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:34:57.175 00:48:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:57.175 00:48:50 -- common/autotest_common.sh@10 -- # set +x 00:34:57.175 ************************************ 00:34:57.175 START TEST bdev_json_nonenclosed 00:34:57.175 ************************************ 00:34:57.175 00:48:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:57.175 [2024-04-24 00:48:50.879484] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:34:57.175 [2024-04-24 00:48:50.880174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148674 ] 00:34:57.434 [2024-04-24 00:48:51.060395] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.693 [2024-04-24 00:48:51.299154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.693 [2024-04-24 00:48:51.299263] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
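Both JSON negative tests exercise json_config_prepare_ctx: nonenclosed.json fails here because the config file must be a single JSON object, and nonarray.json fails just below because the top-level "subsystems" member has to be an array. For contrast, a minimal shape that passes both checks (a sketch built from the error messages, not the fixture contents):

  { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }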
00:34:57.693 [2024-04-24 00:48:51.299297] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:57.693 [2024-04-24 00:48:51.299320] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:57.976 00:34:57.976 real 0m0.942s 00:34:57.976 user 0m0.709s 00:34:57.976 sys 0m0.133s 00:34:57.976 00:48:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:57.976 00:48:51 -- common/autotest_common.sh@10 -- # set +x 00:34:57.976 ************************************ 00:34:57.976 END TEST bdev_json_nonenclosed 00:34:57.976 ************************************ 00:34:58.235 00:48:51 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:58.235 00:48:51 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:34:58.235 00:48:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:58.235 00:48:51 -- common/autotest_common.sh@10 -- # set +x 00:34:58.235 ************************************ 00:34:58.235 START TEST bdev_json_nonarray 00:34:58.235 ************************************ 00:34:58.235 00:48:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:58.235 [2024-04-24 00:48:51.884711] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:34:58.235 [2024-04-24 00:48:51.885434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148717 ] 00:34:58.494 [2024-04-24 00:48:52.045632] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.494 [2024-04-24 00:48:52.258630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.494 [2024-04-24 00:48:52.258966] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:34:58.494 [2024-04-24 00:48:52.259129] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:58.494 [2024-04-24 00:48:52.259192] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:59.062 00:34:59.062 real 0m0.881s 00:34:59.062 user 0m0.628s 00:34:59.062 sys 0m0.153s 00:34:59.062 00:48:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:59.062 00:48:52 -- common/autotest_common.sh@10 -- # set +x 00:34:59.062 ************************************ 00:34:59.062 END TEST bdev_json_nonarray 00:34:59.062 ************************************ 00:34:59.062 00:48:52 -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:34:59.062 00:48:52 -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:34:59.062 00:48:52 -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:34:59.062 00:48:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:34:59.062 00:48:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:59.062 00:48:52 -- common/autotest_common.sh@10 -- # set +x 00:34:59.062 ************************************ 00:34:59.062 START TEST bdev_gpt_uuid 00:34:59.062 ************************************ 00:34:59.062 00:48:52 -- common/autotest_common.sh@1111 -- # bdev_gpt_uuid 00:34:59.062 00:48:52 -- bdev/blockdev.sh@614 -- # local bdev 00:34:59.062 00:48:52 -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:34:59.062 00:48:52 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=148753 00:34:59.062 00:48:52 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:59.062 00:48:52 -- bdev/blockdev.sh@49 -- # waitforlisten 148753 00:34:59.062 00:48:52 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:34:59.062 00:48:52 -- common/autotest_common.sh@817 -- # '[' -z 148753 ']' 00:34:59.062 00:48:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.062 00:48:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:59.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.062 00:48:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.062 00:48:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:59.062 00:48:52 -- common/autotest_common.sh@10 -- # set +x 00:34:59.321 [2024-04-24 00:48:52.904786] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:34:59.321 [2024-04-24 00:48:52.905363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148753 ] 00:34:59.321 [2024-04-24 00:48:53.082229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.581 [2024-04-24 00:48:53.300476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.515 00:48:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:00.515 00:48:54 -- common/autotest_common.sh@850 -- # return 0 00:35:00.515 00:48:54 -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:00.515 00:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:00.515 00:48:54 -- common/autotest_common.sh@10 -- # set +x 00:35:00.774 Some configs were skipped because the RPC state that can call them passed over. 
00:35:00.774 00:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:00.774 00:48:54 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:35:00.774 00:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:00.774 00:48:54 -- common/autotest_common.sh@10 -- # set +x 00:35:00.774 00:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:00.774 00:48:54 -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:35:00.774 00:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:00.774 00:48:54 -- common/autotest_common.sh@10 -- # set +x 00:35:00.774 00:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:00.774 00:48:54 -- bdev/blockdev.sh@621 -- # bdev='[ 00:35:00.774 { 00:35:00.774 "name": "Nvme0n1p1", 00:35:00.774 "aliases": [ 00:35:00.774 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:35:00.774 ], 00:35:00.774 "product_name": "GPT Disk", 00:35:00.774 "block_size": 4096, 00:35:00.774 "num_blocks": 655104, 00:35:00.774 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:35:00.774 "assigned_rate_limits": { 00:35:00.774 "rw_ios_per_sec": 0, 00:35:00.774 "rw_mbytes_per_sec": 0, 00:35:00.774 "r_mbytes_per_sec": 0, 00:35:00.774 "w_mbytes_per_sec": 0 00:35:00.774 }, 00:35:00.774 "claimed": false, 00:35:00.774 "zoned": false, 00:35:00.774 "supported_io_types": { 00:35:00.774 "read": true, 00:35:00.774 "write": true, 00:35:00.774 "unmap": true, 00:35:00.774 "write_zeroes": true, 00:35:00.774 "flush": true, 00:35:00.774 "reset": true, 00:35:00.774 "compare": true, 00:35:00.774 "compare_and_write": false, 00:35:00.774 "abort": true, 00:35:00.774 "nvme_admin": false, 00:35:00.774 "nvme_io": false 00:35:00.774 }, 00:35:00.774 "driver_specific": { 00:35:00.774 "gpt": { 00:35:00.774 "base_bdev": "Nvme0n1", 00:35:00.774 "offset_blocks": 256, 00:35:00.774 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:35:00.774 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:35:00.774 "partition_name": "SPDK_TEST_first" 00:35:00.774 } 00:35:00.774 } 00:35:00.774 } 00:35:00.774 ]' 00:35:00.774 00:48:54 -- bdev/blockdev.sh@622 -- # jq -r length 00:35:00.774 00:48:54 -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:35:00.774 00:48:54 -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:35:00.774 00:48:54 -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:35:00.774 00:48:54 -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:35:00.774 00:48:54 -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:35:00.774 00:48:54 -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:35:00.774 00:48:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:00.774 00:48:54 -- common/autotest_common.sh@10 -- # set +x 00:35:00.774 00:48:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:00.774 00:48:54 -- bdev/blockdev.sh@626 -- # bdev='[ 00:35:00.774 { 00:35:00.774 "name": "Nvme0n1p2", 00:35:00.774 "aliases": [ 00:35:00.774 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:35:00.774 ], 00:35:00.774 "product_name": "GPT Disk", 00:35:00.774 "block_size": 4096, 00:35:00.774 "num_blocks": 655103, 00:35:00.774 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:35:00.774 "assigned_rate_limits": { 00:35:00.774 "rw_ios_per_sec": 0, 00:35:00.774 
"rw_mbytes_per_sec": 0, 00:35:00.774 "r_mbytes_per_sec": 0, 00:35:00.774 "w_mbytes_per_sec": 0 00:35:00.774 }, 00:35:00.774 "claimed": false, 00:35:00.774 "zoned": false, 00:35:00.774 "supported_io_types": { 00:35:00.774 "read": true, 00:35:00.774 "write": true, 00:35:00.774 "unmap": true, 00:35:00.774 "write_zeroes": true, 00:35:00.774 "flush": true, 00:35:00.774 "reset": true, 00:35:00.774 "compare": true, 00:35:00.774 "compare_and_write": false, 00:35:00.774 "abort": true, 00:35:00.774 "nvme_admin": false, 00:35:00.774 "nvme_io": false 00:35:00.774 }, 00:35:00.774 "driver_specific": { 00:35:00.774 "gpt": { 00:35:00.774 "base_bdev": "Nvme0n1", 00:35:00.774 "offset_blocks": 655360, 00:35:00.774 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:35:00.774 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:35:00.774 "partition_name": "SPDK_TEST_second" 00:35:00.774 } 00:35:00.774 } 00:35:00.774 } 00:35:00.774 ]' 00:35:00.774 00:48:54 -- bdev/blockdev.sh@627 -- # jq -r length 00:35:00.774 00:48:54 -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:35:00.774 00:48:54 -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:35:01.033 00:48:54 -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:35:01.033 00:48:54 -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:35:01.033 00:48:54 -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:35:01.033 00:48:54 -- bdev/blockdev.sh@631 -- # killprocess 148753 00:35:01.033 00:48:54 -- common/autotest_common.sh@936 -- # '[' -z 148753 ']' 00:35:01.033 00:48:54 -- common/autotest_common.sh@940 -- # kill -0 148753 00:35:01.033 00:48:54 -- common/autotest_common.sh@941 -- # uname 00:35:01.033 00:48:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:01.033 00:48:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148753 00:35:01.033 00:48:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:01.033 00:48:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:01.033 00:48:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148753' 00:35:01.033 killing process with pid 148753 00:35:01.033 00:48:54 -- common/autotest_common.sh@955 -- # kill 148753 00:35:01.033 00:48:54 -- common/autotest_common.sh@960 -- # wait 148753 00:35:03.601 ************************************ 00:35:03.601 END TEST bdev_gpt_uuid 00:35:03.601 ************************************ 00:35:03.601 00:35:03.601 real 0m4.413s 00:35:03.601 user 0m4.553s 00:35:03.601 sys 0m0.543s 00:35:03.602 00:48:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:03.602 00:48:57 -- common/autotest_common.sh@10 -- # set +x 00:35:03.602 00:48:57 -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:35:03.602 00:48:57 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:35:03.602 00:48:57 -- bdev/blockdev.sh@811 -- # cleanup 00:35:03.602 00:48:57 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:35:03.602 00:48:57 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:35:03.602 00:48:57 -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:35:03.602 00:48:57 -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:35:03.602 00:48:57 -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:35:03.602 00:48:57 -- 
bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:03.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:35:04.118 Waiting for block devices as requested 00:35:04.118 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:04.118 00:48:57 -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:35:04.118 00:48:57 -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:35:04.118 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:35:04.118 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:35:04.118 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:35:04.118 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:35:04.118 00:48:57 -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:35:04.118 ************************************ 00:35:04.118 END TEST blockdev_nvme_gpt 00:35:04.118 ************************************ 00:35:04.118 00:35:04.118 real 0m49.079s 00:35:04.118 user 1m7.820s 00:35:04.118 sys 0m7.196s 00:35:04.118 00:48:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:04.118 00:48:57 -- common/autotest_common.sh@10 -- # set +x 00:35:04.376 00:48:57 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:35:04.376 00:48:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:04.376 00:48:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:04.376 00:48:57 -- common/autotest_common.sh@10 -- # set +x 00:35:04.376 ************************************ 00:35:04.376 START TEST nvme 00:35:04.376 ************************************ 00:35:04.376 00:48:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:35:04.376 * Looking for test storage... 00:35:04.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:35:04.376 00:48:58 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:04.941 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:35:04.941 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:05.876 00:48:59 -- nvme/nvme.sh@79 -- # uname 00:35:05.876 00:48:59 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:35:05.876 00:48:59 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:35:05.876 00:48:59 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:35:05.876 00:48:59 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:35:05.876 00:48:59 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:35:05.876 00:48:59 -- common/autotest_common.sh@1055 -- # echo 0 00:35:05.876 Waiting for stub to ready for secondary processes... 00:35:05.876 00:48:59 -- common/autotest_common.sh@1057 -- # stubpid=149169 00:35:05.876 00:48:59 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:35:05.876 00:48:59 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:35:05.876 00:48:59 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:35:05.876 00:48:59 -- common/autotest_common.sh@1061 -- # [[ -e /proc/149169 ]] 00:35:05.876 00:48:59 -- common/autotest_common.sh@1062 -- # sleep 1s 00:35:06.134 [2024-04-24 00:48:59.701044] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:35:06.134 [2024-04-24 00:48:59.701459] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:35:07.069 00:49:00 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:35:07.069 00:49:00 -- common/autotest_common.sh@1061 -- # [[ -e /proc/149169 ]] 00:35:07.069 00:49:00 -- common/autotest_common.sh@1062 -- # sleep 1s 00:35:07.069 [2024-04-24 00:49:00.788474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:07.327 [2024-04-24 00:49:01.030663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:07.327 [2024-04-24 00:49:01.030823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.327 [2024-04-24 00:49:01.030831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:07.327 [2024-04-24 00:49:01.041298] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:35:07.327 [2024-04-24 00:49:01.041508] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:35:07.327 [2024-04-24 00:49:01.052850] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:35:07.327 [2024-04-24 00:49:01.053421] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:35:07.893 done. 00:35:07.893 00:49:01 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:35:07.893 00:49:01 -- common/autotest_common.sh@1064 -- # echo done. 00:35:07.893 00:49:01 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:35:07.893 00:49:01 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:35:07.893 00:49:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:07.893 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:35:08.151 ************************************ 00:35:08.151 START TEST nvme_reset 00:35:08.151 ************************************ 00:35:08.151 00:49:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:35:08.410 Initializing NVMe Controllers 00:35:08.410 Skipping QEMU NVMe SSD at 0000:00:10.0 00:35:08.410 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:35:08.410 ************************************ 00:35:08.410 END TEST nvme_reset 00:35:08.410 ************************************ 00:35:08.410 00:35:08.410 real 0m0.349s 00:35:08.410 user 0m0.091s 00:35:08.410 sys 0m0.195s 00:35:08.410 00:49:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:08.410 00:49:02 -- common/autotest_common.sh@10 -- # set +x 00:35:08.410 00:49:02 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:35:08.410 00:49:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:08.410 00:49:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:08.410 00:49:02 -- common/autotest_common.sh@10 -- # set +x 00:35:08.410 ************************************ 00:35:08.410 START TEST nvme_identify 00:35:08.410 ************************************ 00:35:08.410 00:49:02 -- common/autotest_common.sh@1111 -- # nvme_identify 00:35:08.410 00:49:02 -- nvme/nvme.sh@12 -- # bdfs=() 00:35:08.410 00:49:02 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:35:08.410 00:49:02 -- nvme/nvme.sh@13 
-- # bdfs=($(get_nvme_bdfs)) 00:35:08.410 00:49:02 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:35:08.410 00:49:02 -- common/autotest_common.sh@1499 -- # bdfs=() 00:35:08.410 00:49:02 -- common/autotest_common.sh@1499 -- # local bdfs 00:35:08.410 00:49:02 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:08.410 00:49:02 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:35:08.410 00:49:02 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:08.668 00:49:02 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:35:08.668 00:49:02 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:35:08.668 00:49:02 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:35:08.928 [2024-04-24 00:49:02.507131] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 149212 terminated unexpected 00:35:08.928 ===================================================== 00:35:08.928 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:08.928 ===================================================== 00:35:08.928 Controller Capabilities/Features 00:35:08.928 ================================ 00:35:08.928 Vendor ID: 1b36 00:35:08.928 Subsystem Vendor ID: 1af4 00:35:08.928 Serial Number: 12340 00:35:08.928 Model Number: QEMU NVMe Ctrl 00:35:08.928 Firmware Version: 8.0.0 00:35:08.928 Recommended Arb Burst: 6 00:35:08.928 IEEE OUI Identifier: 00 54 52 00:35:08.928 Multi-path I/O 00:35:08.928 May have multiple subsystem ports: No 00:35:08.928 May have multiple controllers: No 00:35:08.928 Associated with SR-IOV VF: No 00:35:08.928 Max Data Transfer Size: 524288 00:35:08.928 Max Number of Namespaces: 256 00:35:08.928 Max Number of I/O Queues: 64 00:35:08.928 NVMe Specification Version (VS): 1.4 00:35:08.928 NVMe Specification Version (Identify): 1.4 00:35:08.928 Maximum Queue Entries: 2048 00:35:08.928 Contiguous Queues Required: Yes 00:35:08.928 Arbitration Mechanisms Supported 00:35:08.928 Weighted Round Robin: Not Supported 00:35:08.928 Vendor Specific: Not Supported 00:35:08.928 Reset Timeout: 7500 ms 00:35:08.928 Doorbell Stride: 4 bytes 00:35:08.928 NVM Subsystem Reset: Not Supported 00:35:08.928 Command Sets Supported 00:35:08.928 NVM Command Set: Supported 00:35:08.928 Boot Partition: Not Supported 00:35:08.928 Memory Page Size Minimum: 4096 bytes 00:35:08.928 Memory Page Size Maximum: 65536 bytes 00:35:08.928 Persistent Memory Region: Not Supported 00:35:08.928 Optional Asynchronous Events Supported 00:35:08.928 Namespace Attribute Notices: Supported 00:35:08.928 Firmware Activation Notices: Not Supported 00:35:08.928 ANA Change Notices: Not Supported 00:35:08.928 PLE Aggregate Log Change Notices: Not Supported 00:35:08.928 LBA Status Info Alert Notices: Not Supported 00:35:08.928 EGE Aggregate Log Change Notices: Not Supported 00:35:08.928 Normal NVM Subsystem Shutdown event: Not Supported 00:35:08.928 Zone Descriptor Change Notices: Not Supported 00:35:08.928 Discovery Log Change Notices: Not Supported 00:35:08.928 Controller Attributes 00:35:08.928 128-bit Host Identifier: Not Supported 00:35:08.928 Non-Operational Permissive Mode: Not Supported 00:35:08.928 NVM Sets: Not Supported 00:35:08.928 Read Recovery Levels: Not Supported 00:35:08.928 Endurance Groups: Not Supported 00:35:08.928 Predictable Latency Mode: Not Supported 00:35:08.928 Traffic Based Keep ALive: Not Supported 00:35:08.928 Namespace Granularity: Not 
Supported 00:35:08.928 SQ Associations: Not Supported 00:35:08.928 UUID List: Not Supported 00:35:08.928 Multi-Domain Subsystem: Not Supported 00:35:08.928 Fixed Capacity Management: Not Supported 00:35:08.928 Variable Capacity Management: Not Supported 00:35:08.928 Delete Endurance Group: Not Supported 00:35:08.928 Delete NVM Set: Not Supported 00:35:08.928 Extended LBA Formats Supported: Supported 00:35:08.928 Flexible Data Placement Supported: Not Supported 00:35:08.928 00:35:08.928 Controller Memory Buffer Support 00:35:08.928 ================================ 00:35:08.928 Supported: No 00:35:08.928 00:35:08.928 Persistent Memory Region Support 00:35:08.928 ================================ 00:35:08.928 Supported: No 00:35:08.928 00:35:08.928 Admin Command Set Attributes 00:35:08.928 ============================ 00:35:08.928 Security Send/Receive: Not Supported 00:35:08.928 Format NVM: Supported 00:35:08.928 Firmware Activate/Download: Not Supported 00:35:08.928 Namespace Management: Supported 00:35:08.928 Device Self-Test: Not Supported 00:35:08.928 Directives: Supported 00:35:08.928 NVMe-MI: Not Supported 00:35:08.928 Virtualization Management: Not Supported 00:35:08.928 Doorbell Buffer Config: Supported 00:35:08.928 Get LBA Status Capability: Not Supported 00:35:08.928 Command & Feature Lockdown Capability: Not Supported 00:35:08.928 Abort Command Limit: 4 00:35:08.928 Async Event Request Limit: 4 00:35:08.928 Number of Firmware Slots: N/A 00:35:08.928 Firmware Slot 1 Read-Only: N/A 00:35:08.928 Firmware Activation Without Reset: N/A 00:35:08.928 Multiple Update Detection Support: N/A 00:35:08.928 Firmware Update Granularity: No Information Provided 00:35:08.928 Per-Namespace SMART Log: Yes 00:35:08.928 Asymmetric Namespace Access Log Page: Not Supported 00:35:08.928 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:35:08.928 Command Effects Log Page: Supported 00:35:08.928 Get Log Page Extended Data: Supported 00:35:08.928 Telemetry Log Pages: Not Supported 00:35:08.928 Persistent Event Log Pages: Not Supported 00:35:08.928 Supported Log Pages Log Page: May Support 00:35:08.928 Commands Supported & Effects Log Page: Not Supported 00:35:08.928 Feature Identifiers & Effects Log Page:May Support 00:35:08.928 NVMe-MI Commands & Effects Log Page: May Support 00:35:08.928 Data Area 4 for Telemetry Log: Not Supported 00:35:08.928 Error Log Page Entries Supported: 1 00:35:08.928 Keep Alive: Not Supported 00:35:08.928 00:35:08.928 NVM Command Set Attributes 00:35:08.928 ========================== 00:35:08.928 Submission Queue Entry Size 00:35:08.928 Max: 64 00:35:08.928 Min: 64 00:35:08.928 Completion Queue Entry Size 00:35:08.928 Max: 16 00:35:08.928 Min: 16 00:35:08.928 Number of Namespaces: 256 00:35:08.928 Compare Command: Supported 00:35:08.928 Write Uncorrectable Command: Not Supported 00:35:08.928 Dataset Management Command: Supported 00:35:08.928 Write Zeroes Command: Supported 00:35:08.928 Set Features Save Field: Supported 00:35:08.928 Reservations: Not Supported 00:35:08.928 Timestamp: Supported 00:35:08.928 Copy: Supported 00:35:08.928 Volatile Write Cache: Present 00:35:08.928 Atomic Write Unit (Normal): 1 00:35:08.928 Atomic Write Unit (PFail): 1 00:35:08.928 Atomic Compare & Write Unit: 1 00:35:08.928 Fused Compare & Write: Not Supported 00:35:08.928 Scatter-Gather List 00:35:08.928 SGL Command Set: Supported 00:35:08.928 SGL Keyed: Not Supported 00:35:08.928 SGL Bit Bucket Descriptor: Not Supported 00:35:08.928 SGL Metadata Pointer: Not Supported 00:35:08.928 Oversized SGL: Not 
Supported 00:35:08.928 SGL Metadata Address: Not Supported 00:35:08.928 SGL Offset: Not Supported 00:35:08.928 Transport SGL Data Block: Not Supported 00:35:08.928 Replay Protected Memory Block: Not Supported 00:35:08.928 00:35:08.928 Firmware Slot Information 00:35:08.928 ========================= 00:35:08.928 Active slot: 1 00:35:08.928 Slot 1 Firmware Revision: 1.0 00:35:08.928 00:35:08.928 00:35:08.928 Commands Supported and Effects 00:35:08.928 ============================== 00:35:08.928 Admin Commands 00:35:08.928 -------------- 00:35:08.928 Delete I/O Submission Queue (00h): Supported 00:35:08.928 Create I/O Submission Queue (01h): Supported 00:35:08.928 Get Log Page (02h): Supported 00:35:08.928 Delete I/O Completion Queue (04h): Supported 00:35:08.928 Create I/O Completion Queue (05h): Supported 00:35:08.928 Identify (06h): Supported 00:35:08.928 Abort (08h): Supported 00:35:08.928 Set Features (09h): Supported 00:35:08.928 Get Features (0Ah): Supported 00:35:08.928 Asynchronous Event Request (0Ch): Supported 00:35:08.928 Namespace Attachment (15h): Supported NS-Inventory-Change 00:35:08.928 Directive Send (19h): Supported 00:35:08.928 Directive Receive (1Ah): Supported 00:35:08.928 Virtualization Management (1Ch): Supported 00:35:08.928 Doorbell Buffer Config (7Ch): Supported 00:35:08.928 Format NVM (80h): Supported LBA-Change 00:35:08.928 I/O Commands 00:35:08.928 ------------ 00:35:08.928 Flush (00h): Supported LBA-Change 00:35:08.928 Write (01h): Supported LBA-Change 00:35:08.928 Read (02h): Supported 00:35:08.928 Compare (05h): Supported 00:35:08.928 Write Zeroes (08h): Supported LBA-Change 00:35:08.928 Dataset Management (09h): Supported LBA-Change 00:35:08.928 Unknown (0Ch): Supported 00:35:08.928 Unknown (12h): Supported 00:35:08.928 Copy (19h): Supported LBA-Change 00:35:08.928 Unknown (1Dh): Supported LBA-Change 00:35:08.928 00:35:08.928 Error Log 00:35:08.928 ========= 00:35:08.928 00:35:08.928 Arbitration 00:35:08.928 =========== 00:35:08.928 Arbitration Burst: no limit 00:35:08.928 00:35:08.928 Power Management 00:35:08.928 ================ 00:35:08.929 Number of Power States: 1 00:35:08.929 Current Power State: Power State #0 00:35:08.929 Power State #0: 00:35:08.929 Max Power: 25.00 W 00:35:08.929 Non-Operational State: Operational 00:35:08.929 Entry Latency: 16 microseconds 00:35:08.929 Exit Latency: 4 microseconds 00:35:08.929 Relative Read Throughput: 0 00:35:08.929 Relative Read Latency: 0 00:35:08.929 Relative Write Throughput: 0 00:35:08.929 Relative Write Latency: 0 00:35:08.929 Idle Power: Not Reported 00:35:08.929 Active Power: Not Reported 00:35:08.929 Non-Operational Permissive Mode: Not Supported 00:35:08.929 00:35:08.929 Health Information 00:35:08.929 ================== 00:35:08.929 Critical Warnings: 00:35:08.929 Available Spare Space: OK 00:35:08.929 Temperature: OK 00:35:08.929 Device Reliability: OK 00:35:08.929 Read Only: No 00:35:08.929 Volatile Memory Backup: OK 00:35:08.929 Current Temperature: 323 Kelvin (50 Celsius) 00:35:08.929 Temperature Threshold: 343 Kelvin (70 Celsius) 00:35:08.929 Available Spare: 0% 00:35:08.929 Available Spare Threshold: 0% 00:35:08.929 Life Percentage Used: 0% 00:35:08.929 Data Units Read: 3668 00:35:08.929 Data Units Written: 3325 00:35:08.929 Host Read Commands: 185236 00:35:08.929 Host Write Commands: 198200 00:35:08.929 Controller Busy Time: 0 minutes 00:35:08.929 Power Cycles: 0 00:35:08.929 Power On Hours: 0 hours 00:35:08.929 Unsafe Shutdowns: 0 00:35:08.929 Unrecoverable Media Errors: 0 00:35:08.929 Lifetime 
Error Log Entries: 0 00:35:08.929 Warning Temperature Time: 0 minutes 00:35:08.929 Critical Temperature Time: 0 minutes 00:35:08.929 00:35:08.929 Number of Queues 00:35:08.929 ================ 00:35:08.929 Number of I/O Submission Queues: 64 00:35:08.929 Number of I/O Completion Queues: 64 00:35:08.929 00:35:08.929 ZNS Specific Controller Data 00:35:08.929 ============================ 00:35:08.929 Zone Append Size Limit: 0 00:35:08.929 00:35:08.929 00:35:08.929 Active Namespaces 00:35:08.929 ================= 00:35:08.929 Namespace ID:1 00:35:08.929 Error Recovery Timeout: Unlimited 00:35:08.929 Command Set Identifier: NVM (00h) 00:35:08.929 Deallocate: Supported 00:35:08.929 Deallocated/Unwritten Error: Supported 00:35:08.929 Deallocated Read Value: All 0x00 00:35:08.929 Deallocate in Write Zeroes: Not Supported 00:35:08.929 Deallocated Guard Field: 0xFFFF 00:35:08.929 Flush: Supported 00:35:08.929 Reservation: Not Supported 00:35:08.929 Namespace Sharing Capabilities: Private 00:35:08.929 Size (in LBAs): 1310720 (5GiB) 00:35:08.929 Capacity (in LBAs): 1310720 (5GiB) 00:35:08.929 Utilization (in LBAs): 1310720 (5GiB) 00:35:08.929 Thin Provisioning: Not Supported 00:35:08.929 Per-NS Atomic Units: No 00:35:08.929 Maximum Single Source Range Length: 128 00:35:08.929 Maximum Copy Length: 128 00:35:08.929 Maximum Source Range Count: 128 00:35:08.929 NGUID/EUI64 Never Reused: No 00:35:08.929 Namespace Write Protected: No 00:35:08.929 Number of LBA Formats: 8 00:35:08.929 Current LBA Format: LBA Format #04 00:35:08.929 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:08.929 LBA Format #01: Data Size: 512 Metadata Size: 8 00:35:08.929 LBA Format #02: Data Size: 512 Metadata Size: 16 00:35:08.929 LBA Format #03: Data Size: 512 Metadata Size: 64 00:35:08.929 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:35:08.929 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:35:08.929 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:35:08.929 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:35:08.929 00:35:08.929 00:49:02 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:35:08.929 00:49:02 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:35:09.187 ===================================================== 00:35:09.187 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:09.187 ===================================================== 00:35:09.187 Controller Capabilities/Features 00:35:09.187 ================================ 00:35:09.187 Vendor ID: 1b36 00:35:09.187 Subsystem Vendor ID: 1af4 00:35:09.187 Serial Number: 12340 00:35:09.187 Model Number: QEMU NVMe Ctrl 00:35:09.187 Firmware Version: 8.0.0 00:35:09.187 Recommended Arb Burst: 6 00:35:09.187 IEEE OUI Identifier: 00 54 52 00:35:09.187 Multi-path I/O 00:35:09.187 May have multiple subsystem ports: No 00:35:09.187 May have multiple controllers: No 00:35:09.187 Associated with SR-IOV VF: No 00:35:09.187 Max Data Transfer Size: 524288 00:35:09.187 Max Number of Namespaces: 256 00:35:09.187 Max Number of I/O Queues: 64 00:35:09.187 NVMe Specification Version (VS): 1.4 00:35:09.187 NVMe Specification Version (Identify): 1.4 00:35:09.187 Maximum Queue Entries: 2048 00:35:09.187 Contiguous Queues Required: Yes 00:35:09.187 Arbitration Mechanisms Supported 00:35:09.187 Weighted Round Robin: Not Supported 00:35:09.187 Vendor Specific: Not Supported 00:35:09.187 Reset Timeout: 7500 ms 00:35:09.187 Doorbell Stride: 4 bytes 00:35:09.187 NVM Subsystem Reset: Not Supported 
00:35:09.187 Command Sets Supported 00:35:09.187 NVM Command Set: Supported 00:35:09.187 Boot Partition: Not Supported 00:35:09.187 Memory Page Size Minimum: 4096 bytes 00:35:09.187 Memory Page Size Maximum: 65536 bytes 00:35:09.187 Persistent Memory Region: Not Supported 00:35:09.187 Optional Asynchronous Events Supported 00:35:09.187 Namespace Attribute Notices: Supported 00:35:09.187 Firmware Activation Notices: Not Supported 00:35:09.187 ANA Change Notices: Not Supported 00:35:09.187 PLE Aggregate Log Change Notices: Not Supported 00:35:09.187 LBA Status Info Alert Notices: Not Supported 00:35:09.187 EGE Aggregate Log Change Notices: Not Supported 00:35:09.187 Normal NVM Subsystem Shutdown event: Not Supported 00:35:09.187 Zone Descriptor Change Notices: Not Supported 00:35:09.187 Discovery Log Change Notices: Not Supported 00:35:09.187 Controller Attributes 00:35:09.187 128-bit Host Identifier: Not Supported 00:35:09.187 Non-Operational Permissive Mode: Not Supported 00:35:09.187 NVM Sets: Not Supported 00:35:09.187 Read Recovery Levels: Not Supported 00:35:09.187 Endurance Groups: Not Supported 00:35:09.187 Predictable Latency Mode: Not Supported 00:35:09.187 Traffic Based Keep ALive: Not Supported 00:35:09.187 Namespace Granularity: Not Supported 00:35:09.187 SQ Associations: Not Supported 00:35:09.187 UUID List: Not Supported 00:35:09.187 Multi-Domain Subsystem: Not Supported 00:35:09.187 Fixed Capacity Management: Not Supported 00:35:09.187 Variable Capacity Management: Not Supported 00:35:09.187 Delete Endurance Group: Not Supported 00:35:09.187 Delete NVM Set: Not Supported 00:35:09.187 Extended LBA Formats Supported: Supported 00:35:09.187 Flexible Data Placement Supported: Not Supported 00:35:09.187 00:35:09.187 Controller Memory Buffer Support 00:35:09.187 ================================ 00:35:09.187 Supported: No 00:35:09.187 00:35:09.187 Persistent Memory Region Support 00:35:09.187 ================================ 00:35:09.187 Supported: No 00:35:09.187 00:35:09.187 Admin Command Set Attributes 00:35:09.187 ============================ 00:35:09.187 Security Send/Receive: Not Supported 00:35:09.187 Format NVM: Supported 00:35:09.187 Firmware Activate/Download: Not Supported 00:35:09.187 Namespace Management: Supported 00:35:09.187 Device Self-Test: Not Supported 00:35:09.187 Directives: Supported 00:35:09.187 NVMe-MI: Not Supported 00:35:09.187 Virtualization Management: Not Supported 00:35:09.187 Doorbell Buffer Config: Supported 00:35:09.187 Get LBA Status Capability: Not Supported 00:35:09.187 Command & Feature Lockdown Capability: Not Supported 00:35:09.187 Abort Command Limit: 4 00:35:09.187 Async Event Request Limit: 4 00:35:09.187 Number of Firmware Slots: N/A 00:35:09.187 Firmware Slot 1 Read-Only: N/A 00:35:09.187 Firmware Activation Without Reset: N/A 00:35:09.187 Multiple Update Detection Support: N/A 00:35:09.187 Firmware Update Granularity: No Information Provided 00:35:09.187 Per-Namespace SMART Log: Yes 00:35:09.187 Asymmetric Namespace Access Log Page: Not Supported 00:35:09.187 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:35:09.187 Command Effects Log Page: Supported 00:35:09.187 Get Log Page Extended Data: Supported 00:35:09.187 Telemetry Log Pages: Not Supported 00:35:09.187 Persistent Event Log Pages: Not Supported 00:35:09.187 Supported Log Pages Log Page: May Support 00:35:09.187 Commands Supported & Effects Log Page: Not Supported 00:35:09.187 Feature Identifiers & Effects Log Page:May Support 00:35:09.187 NVMe-MI Commands & Effects Log Page: May 
Support 00:35:09.187 Data Area 4 for Telemetry Log: Not Supported 00:35:09.187 Error Log Page Entries Supported: 1 00:35:09.187 Keep Alive: Not Supported 00:35:09.187 00:35:09.187 NVM Command Set Attributes 00:35:09.187 ========================== 00:35:09.187 Submission Queue Entry Size 00:35:09.187 Max: 64 00:35:09.187 Min: 64 00:35:09.187 Completion Queue Entry Size 00:35:09.187 Max: 16 00:35:09.188 Min: 16 00:35:09.188 Number of Namespaces: 256 00:35:09.188 Compare Command: Supported 00:35:09.188 Write Uncorrectable Command: Not Supported 00:35:09.188 Dataset Management Command: Supported 00:35:09.188 Write Zeroes Command: Supported 00:35:09.188 Set Features Save Field: Supported 00:35:09.188 Reservations: Not Supported 00:35:09.188 Timestamp: Supported 00:35:09.188 Copy: Supported 00:35:09.188 Volatile Write Cache: Present 00:35:09.188 Atomic Write Unit (Normal): 1 00:35:09.188 Atomic Write Unit (PFail): 1 00:35:09.188 Atomic Compare & Write Unit: 1 00:35:09.188 Fused Compare & Write: Not Supported 00:35:09.188 Scatter-Gather List 00:35:09.188 SGL Command Set: Supported 00:35:09.188 SGL Keyed: Not Supported 00:35:09.188 SGL Bit Bucket Descriptor: Not Supported 00:35:09.188 SGL Metadata Pointer: Not Supported 00:35:09.188 Oversized SGL: Not Supported 00:35:09.188 SGL Metadata Address: Not Supported 00:35:09.188 SGL Offset: Not Supported 00:35:09.188 Transport SGL Data Block: Not Supported 00:35:09.188 Replay Protected Memory Block: Not Supported 00:35:09.188 00:35:09.188 Firmware Slot Information 00:35:09.188 ========================= 00:35:09.188 Active slot: 1 00:35:09.188 Slot 1 Firmware Revision: 1.0 00:35:09.188 00:35:09.188 00:35:09.188 Commands Supported and Effects 00:35:09.188 ============================== 00:35:09.188 Admin Commands 00:35:09.188 -------------- 00:35:09.188 Delete I/O Submission Queue (00h): Supported 00:35:09.188 Create I/O Submission Queue (01h): Supported 00:35:09.188 Get Log Page (02h): Supported 00:35:09.188 Delete I/O Completion Queue (04h): Supported 00:35:09.188 Create I/O Completion Queue (05h): Supported 00:35:09.188 Identify (06h): Supported 00:35:09.188 Abort (08h): Supported 00:35:09.188 Set Features (09h): Supported 00:35:09.188 Get Features (0Ah): Supported 00:35:09.188 Asynchronous Event Request (0Ch): Supported 00:35:09.188 Namespace Attachment (15h): Supported NS-Inventory-Change 00:35:09.188 Directive Send (19h): Supported 00:35:09.188 Directive Receive (1Ah): Supported 00:35:09.188 Virtualization Management (1Ch): Supported 00:35:09.188 Doorbell Buffer Config (7Ch): Supported 00:35:09.188 Format NVM (80h): Supported LBA-Change 00:35:09.188 I/O Commands 00:35:09.188 ------------ 00:35:09.188 Flush (00h): Supported LBA-Change 00:35:09.188 Write (01h): Supported LBA-Change 00:35:09.188 Read (02h): Supported 00:35:09.188 Compare (05h): Supported 00:35:09.188 Write Zeroes (08h): Supported LBA-Change 00:35:09.188 Dataset Management (09h): Supported LBA-Change 00:35:09.188 Unknown (0Ch): Supported 00:35:09.188 Unknown (12h): Supported 00:35:09.188 Copy (19h): Supported LBA-Change 00:35:09.188 Unknown (1Dh): Supported LBA-Change 00:35:09.188 00:35:09.188 Error Log 00:35:09.188 ========= 00:35:09.188 00:35:09.188 Arbitration 00:35:09.188 =========== 00:35:09.188 Arbitration Burst: no limit 00:35:09.188 00:35:09.188 Power Management 00:35:09.188 ================ 00:35:09.188 Number of Power States: 1 00:35:09.188 Current Power State: Power State #0 00:35:09.188 Power State #0: 00:35:09.188 Max Power: 25.00 W 00:35:09.188 Non-Operational State: 
Operational 00:35:09.188 Entry Latency: 16 microseconds 00:35:09.188 Exit Latency: 4 microseconds 00:35:09.188 Relative Read Throughput: 0 00:35:09.188 Relative Read Latency: 0 00:35:09.188 Relative Write Throughput: 0 00:35:09.188 Relative Write Latency: 0 00:35:09.188 Idle Power: Not Reported 00:35:09.188 Active Power: Not Reported 00:35:09.188 Non-Operational Permissive Mode: Not Supported 00:35:09.188 00:35:09.188 Health Information 00:35:09.188 ================== 00:35:09.188 Critical Warnings: 00:35:09.188 Available Spare Space: OK 00:35:09.188 Temperature: OK 00:35:09.188 Device Reliability: OK 00:35:09.188 Read Only: No 00:35:09.188 Volatile Memory Backup: OK 00:35:09.188 Current Temperature: 323 Kelvin (50 Celsius) 00:35:09.188 Temperature Threshold: 343 Kelvin (70 Celsius) 00:35:09.188 Available Spare: 0% 00:35:09.188 Available Spare Threshold: 0% 00:35:09.188 Life Percentage Used: 0% 00:35:09.188 Data Units Read: 3668 00:35:09.188 Data Units Written: 3325 00:35:09.188 Host Read Commands: 185236 00:35:09.188 Host Write Commands: 198200 00:35:09.188 Controller Busy Time: 0 minutes 00:35:09.188 Power Cycles: 0 00:35:09.188 Power On Hours: 0 hours 00:35:09.188 Unsafe Shutdowns: 0 00:35:09.188 Unrecoverable Media Errors: 0 00:35:09.188 Lifetime Error Log Entries: 0 00:35:09.188 Warning Temperature Time: 0 minutes 00:35:09.188 Critical Temperature Time: 0 minutes 00:35:09.188 00:35:09.188 Number of Queues 00:35:09.188 ================ 00:35:09.188 Number of I/O Submission Queues: 64 00:35:09.188 Number of I/O Completion Queues: 64 00:35:09.188 00:35:09.188 ZNS Specific Controller Data 00:35:09.188 ============================ 00:35:09.188 Zone Append Size Limit: 0 00:35:09.188 00:35:09.188 00:35:09.188 Active Namespaces 00:35:09.188 ================= 00:35:09.188 Namespace ID:1 00:35:09.188 Error Recovery Timeout: Unlimited 00:35:09.188 Command Set Identifier: NVM (00h) 00:35:09.188 Deallocate: Supported 00:35:09.188 Deallocated/Unwritten Error: Supported 00:35:09.188 Deallocated Read Value: All 0x00 00:35:09.188 Deallocate in Write Zeroes: Not Supported 00:35:09.188 Deallocated Guard Field: 0xFFFF 00:35:09.188 Flush: Supported 00:35:09.188 Reservation: Not Supported 00:35:09.188 Namespace Sharing Capabilities: Private 00:35:09.188 Size (in LBAs): 1310720 (5GiB) 00:35:09.188 Capacity (in LBAs): 1310720 (5GiB) 00:35:09.188 Utilization (in LBAs): 1310720 (5GiB) 00:35:09.188 Thin Provisioning: Not Supported 00:35:09.188 Per-NS Atomic Units: No 00:35:09.188 Maximum Single Source Range Length: 128 00:35:09.188 Maximum Copy Length: 128 00:35:09.188 Maximum Source Range Count: 128 00:35:09.188 NGUID/EUI64 Never Reused: No 00:35:09.188 Namespace Write Protected: No 00:35:09.188 Number of LBA Formats: 8 00:35:09.188 Current LBA Format: LBA Format #04 00:35:09.188 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:09.188 LBA Format #01: Data Size: 512 Metadata Size: 8 00:35:09.188 LBA Format #02: Data Size: 512 Metadata Size: 16 00:35:09.188 LBA Format #03: Data Size: 512 Metadata Size: 64 00:35:09.188 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:35:09.188 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:35:09.188 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:35:09.188 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:35:09.188 00:35:09.188 ************************************ 00:35:09.188 END TEST nvme_identify 00:35:09.188 ************************************ 00:35:09.188 00:35:09.188 real 0m0.800s 00:35:09.188 user 0m0.307s 00:35:09.188 sys 0m0.391s 00:35:09.188 
00:49:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:09.188 00:49:02 -- common/autotest_common.sh@10 -- # set +x 00:35:09.446 00:49:02 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:35:09.446 00:49:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:09.446 00:49:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:09.446 00:49:02 -- common/autotest_common.sh@10 -- # set +x 00:35:09.446 ************************************ 00:35:09.446 START TEST nvme_perf 00:35:09.446 ************************************ 00:35:09.446 00:49:03 -- common/autotest_common.sh@1111 -- # nvme_perf 00:35:09.446 00:49:03 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:35:10.825 Initializing NVMe Controllers 00:35:10.825 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:10.825 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:35:10.825 Initialization complete. Launching workers. 00:35:10.825 ======================================================== 00:35:10.825 Latency(us) 00:35:10.825 Device Information : IOPS MiB/s Average min max 00:35:10.825 PCIE (0000:00:10.0) NSID 1 from core 0: 78395.92 918.70 1630.96 630.18 7555.45 00:35:10.825 ======================================================== 00:35:10.825 Total : 78395.92 918.70 1630.96 630.18 7555.45 00:35:10.825 00:35:10.825 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:35:10.825 ================================================================================= 00:35:10.825 1.00000% : 885.516us 00:35:10.825 10.00000% : 1084.465us 00:35:10.825 25.00000% : 1287.314us 00:35:10.825 50.00000% : 1583.787us 00:35:10.825 75.00000% : 1880.259us 00:35:10.825 90.00000% : 2168.930us 00:35:10.825 95.00000% : 2481.006us 00:35:10.825 98.00000% : 2886.705us 00:35:10.825 99.00000% : 3229.989us 00:35:10.825 99.50000% : 3573.272us 00:35:10.825 99.90000% : 5773.410us 00:35:10.825 99.99000% : 7240.168us 00:35:10.825 99.99900% : 7583.451us 00:35:10.825 99.99990% : 7583.451us 00:35:10.825 99.99999% : 7583.451us 00:35:10.825 00:35:10.825 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:35:10.825 ============================================================================== 00:35:10.825 Range in us Cumulative IO count 00:35:10.825 628.053 - 631.954: 0.0026% ( 2) 00:35:10.825 639.756 - 643.657: 0.0038% ( 1) 00:35:10.825 643.657 - 647.558: 0.0051% ( 1) 00:35:10.825 651.459 - 655.360: 0.0102% ( 4) 00:35:10.825 655.360 - 659.261: 0.0128% ( 2) 00:35:10.825 659.261 - 663.162: 0.0166% ( 3) 00:35:10.825 663.162 - 667.063: 0.0179% ( 1) 00:35:10.825 667.063 - 670.964: 0.0191% ( 1) 00:35:10.825 670.964 - 674.865: 0.0242% ( 4) 00:35:10.825 674.865 - 678.766: 0.0281% ( 3) 00:35:10.826 678.766 - 682.667: 0.0306% ( 2) 00:35:10.826 682.667 - 686.568: 0.0383% ( 6) 00:35:10.826 686.568 - 690.469: 0.0408% ( 2) 00:35:10.826 690.469 - 694.370: 0.0523% ( 9) 00:35:10.826 694.370 - 698.270: 0.0561% ( 3) 00:35:10.826 698.270 - 702.171: 0.0638% ( 6) 00:35:10.826 702.171 - 706.072: 0.0676% ( 3) 00:35:10.826 706.072 - 709.973: 0.0727% ( 4) 00:35:10.826 709.973 - 713.874: 0.0778% ( 4) 00:35:10.826 713.874 - 717.775: 0.0880% ( 8) 00:35:10.826 717.775 - 721.676: 0.0982% ( 8) 00:35:10.826 721.676 - 725.577: 0.1020% ( 3) 00:35:10.826 725.577 - 729.478: 0.1148% ( 10) 00:35:10.826 729.478 - 733.379: 0.1237% ( 7) 00:35:10.826 733.379 - 737.280: 0.1390% ( 12) 00:35:10.826 737.280 - 741.181: 0.1454% ( 5) 00:35:10.826 741.181 - 745.082: 0.1594% ( 11) 
00:35:10.826 745.082 - 748.983: 0.1645% ( 4) 00:35:10.826 748.983 - 752.884: 0.1824% ( 14) 00:35:10.826 752.884 - 756.785: 0.1939% ( 9) 00:35:10.826 756.785 - 760.686: 0.2092% ( 12) 00:35:10.826 760.686 - 764.587: 0.2232% ( 11) 00:35:10.826 764.587 - 768.488: 0.2385% ( 12) 00:35:10.826 768.488 - 772.389: 0.2577% ( 15) 00:35:10.826 772.389 - 776.290: 0.2755% ( 14) 00:35:10.826 776.290 - 780.190: 0.2895% ( 11) 00:35:10.826 780.190 - 784.091: 0.3036% ( 11) 00:35:10.826 784.091 - 787.992: 0.3214% ( 14) 00:35:10.826 787.992 - 791.893: 0.3406% ( 15) 00:35:10.826 791.893 - 795.794: 0.3559% ( 12) 00:35:10.826 795.794 - 799.695: 0.3750% ( 15) 00:35:10.826 799.695 - 803.596: 0.3929% ( 14) 00:35:10.826 803.596 - 807.497: 0.4082% ( 12) 00:35:10.826 807.497 - 811.398: 0.4260% ( 14) 00:35:10.826 811.398 - 815.299: 0.4515% ( 20) 00:35:10.826 815.299 - 819.200: 0.4770% ( 20) 00:35:10.826 819.200 - 823.101: 0.4962% ( 15) 00:35:10.826 823.101 - 827.002: 0.5166% ( 16) 00:35:10.826 827.002 - 830.903: 0.5434% ( 21) 00:35:10.826 830.903 - 834.804: 0.5714% ( 22) 00:35:10.826 834.804 - 838.705: 0.5982% ( 21) 00:35:10.826 838.705 - 842.606: 0.6199% ( 17) 00:35:10.826 842.606 - 846.507: 0.6531% ( 26) 00:35:10.826 846.507 - 850.408: 0.6786% ( 20) 00:35:10.826 850.408 - 854.309: 0.7117% ( 26) 00:35:10.826 854.309 - 858.210: 0.7436% ( 25) 00:35:10.826 858.210 - 862.110: 0.7640% ( 16) 00:35:10.826 862.110 - 866.011: 0.8061% ( 33) 00:35:10.826 866.011 - 869.912: 0.8457% ( 31) 00:35:10.826 869.912 - 873.813: 0.8954% ( 39) 00:35:10.826 873.813 - 877.714: 0.9286% ( 26) 00:35:10.826 877.714 - 881.615: 0.9783% ( 39) 00:35:10.826 881.615 - 885.516: 1.0217% ( 34) 00:35:10.826 885.516 - 889.417: 1.0676% ( 36) 00:35:10.826 889.417 - 893.318: 1.1212% ( 42) 00:35:10.826 893.318 - 897.219: 1.1952% ( 58) 00:35:10.826 897.219 - 901.120: 1.2526% ( 45) 00:35:10.826 901.120 - 905.021: 1.3240% ( 56) 00:35:10.826 905.021 - 908.922: 1.3903% ( 52) 00:35:10.826 908.922 - 912.823: 1.4847% ( 74) 00:35:10.826 912.823 - 916.724: 1.5727% ( 69) 00:35:10.826 916.724 - 920.625: 1.6760% ( 81) 00:35:10.826 920.625 - 924.526: 1.7653% ( 70) 00:35:10.826 924.526 - 928.427: 1.8584% ( 73) 00:35:10.826 928.427 - 932.328: 1.9745% ( 91) 00:35:10.826 932.328 - 936.229: 2.0944% ( 94) 00:35:10.826 936.229 - 940.130: 2.2360% ( 111) 00:35:10.826 940.130 - 944.030: 2.3533% ( 92) 00:35:10.826 944.030 - 947.931: 2.4987% ( 114) 00:35:10.826 947.931 - 951.832: 2.6327% ( 105) 00:35:10.826 951.832 - 955.733: 2.7844% ( 119) 00:35:10.826 955.733 - 959.634: 2.9222% ( 108) 00:35:10.826 959.634 - 963.535: 3.0867% ( 129) 00:35:10.826 963.535 - 967.436: 3.2679% ( 142) 00:35:10.826 967.436 - 971.337: 3.4235% ( 122) 00:35:10.826 971.337 - 975.238: 3.5842% ( 126) 00:35:10.826 975.238 - 979.139: 3.7959% ( 166) 00:35:10.826 979.139 - 983.040: 4.0038% ( 163) 00:35:10.826 983.040 - 986.941: 4.1977% ( 152) 00:35:10.826 986.941 - 990.842: 4.3903% ( 151) 00:35:10.826 990.842 - 994.743: 4.6020% ( 166) 00:35:10.826 994.743 - 998.644: 4.8099% ( 163) 00:35:10.826 998.644 - 1006.446: 5.2028% ( 308) 00:35:10.826 1006.446 - 1014.248: 5.6429% ( 345) 00:35:10.826 1014.248 - 1022.050: 6.0867% ( 348) 00:35:10.826 1022.050 - 1029.851: 6.5383% ( 354) 00:35:10.826 1029.851 - 1037.653: 7.0268% ( 383) 00:35:10.826 1037.653 - 1045.455: 7.4719% ( 349) 00:35:10.826 1045.455 - 1053.257: 7.9962% ( 411) 00:35:10.826 1053.257 - 1061.059: 8.4809% ( 380) 00:35:10.826 1061.059 - 1068.861: 9.0255% ( 427) 00:35:10.826 1068.861 - 1076.663: 9.5115% ( 381) 00:35:10.826 1076.663 - 1084.465: 10.0383% ( 413) 00:35:10.826 
1084.465 - 1092.267: 10.5472% ( 399) 00:35:10.826 1092.267 - 1100.069: 11.0995% ( 433) 00:35:10.826 1100.069 - 1107.870: 11.6199% ( 408) 00:35:10.826 1107.870 - 1115.672: 12.1875% ( 445) 00:35:10.826 1115.672 - 1123.474: 12.7194% ( 417) 00:35:10.826 1123.474 - 1131.276: 13.2895% ( 447) 00:35:10.826 1131.276 - 1139.078: 13.8316% ( 425) 00:35:10.826 1139.078 - 1146.880: 14.4133% ( 456) 00:35:10.826 1146.880 - 1154.682: 14.9541% ( 424) 00:35:10.826 1154.682 - 1162.484: 15.5548% ( 471) 00:35:10.826 1162.484 - 1170.286: 16.1212% ( 444) 00:35:10.826 1170.286 - 1178.088: 16.6977% ( 452) 00:35:10.826 1178.088 - 1185.890: 17.2908% ( 465) 00:35:10.826 1185.890 - 1193.691: 17.8584% ( 445) 00:35:10.826 1193.691 - 1201.493: 18.4694% ( 479) 00:35:10.826 1201.493 - 1209.295: 19.0599% ( 463) 00:35:10.826 1209.295 - 1217.097: 19.6875% ( 492) 00:35:10.826 1217.097 - 1224.899: 20.2691% ( 456) 00:35:10.826 1224.899 - 1232.701: 20.9133% ( 505) 00:35:10.826 1232.701 - 1240.503: 21.4911% ( 453) 00:35:10.826 1240.503 - 1248.305: 22.1276% ( 499) 00:35:10.826 1248.305 - 1256.107: 22.7449% ( 484) 00:35:10.826 1256.107 - 1263.909: 23.3584% ( 481) 00:35:10.826 1263.909 - 1271.710: 23.9923% ( 497) 00:35:10.826 1271.710 - 1279.512: 24.5727% ( 455) 00:35:10.826 1279.512 - 1287.314: 25.2602% ( 539) 00:35:10.826 1287.314 - 1295.116: 25.8189% ( 438) 00:35:10.826 1295.116 - 1302.918: 26.5255% ( 554) 00:35:10.826 1302.918 - 1310.720: 27.0893% ( 442) 00:35:10.826 1310.720 - 1318.522: 27.7985% ( 556) 00:35:10.826 1318.522 - 1326.324: 28.3839% ( 459) 00:35:10.826 1326.324 - 1334.126: 29.0625% ( 532) 00:35:10.826 1334.126 - 1341.928: 29.6952% ( 496) 00:35:10.826 1341.928 - 1349.730: 30.3431% ( 508) 00:35:10.826 1349.730 - 1357.531: 31.0089% ( 522) 00:35:10.826 1357.531 - 1365.333: 31.6365% ( 492) 00:35:10.826 1365.333 - 1373.135: 32.3010% ( 521) 00:35:10.826 1373.135 - 1380.937: 32.9311% ( 494) 00:35:10.826 1380.937 - 1388.739: 33.5804% ( 509) 00:35:10.826 1388.739 - 1396.541: 34.2117% ( 495) 00:35:10.826 1396.541 - 1404.343: 34.8814% ( 525) 00:35:10.826 1404.343 - 1412.145: 35.5255% ( 505) 00:35:10.826 1412.145 - 1419.947: 36.2194% ( 544) 00:35:10.826 1419.947 - 1427.749: 36.8457% ( 491) 00:35:10.826 1427.749 - 1435.550: 37.5306% ( 537) 00:35:10.826 1435.550 - 1443.352: 38.1467% ( 483) 00:35:10.826 1443.352 - 1451.154: 38.8457% ( 548) 00:35:10.826 1451.154 - 1458.956: 39.4962% ( 510) 00:35:10.827 1458.956 - 1466.758: 40.1582% ( 519) 00:35:10.827 1466.758 - 1474.560: 40.7857% ( 492) 00:35:10.827 1474.560 - 1482.362: 41.4872% ( 550) 00:35:10.827 1482.362 - 1490.164: 42.1186% ( 495) 00:35:10.827 1490.164 - 1497.966: 42.7959% ( 531) 00:35:10.827 1497.966 - 1505.768: 43.4503% ( 513) 00:35:10.827 1505.768 - 1513.570: 44.1148% ( 521) 00:35:10.827 1513.570 - 1521.371: 44.7857% ( 526) 00:35:10.827 1521.371 - 1529.173: 45.4477% ( 519) 00:35:10.827 1529.173 - 1536.975: 46.1276% ( 533) 00:35:10.827 1536.975 - 1544.777: 46.7691% ( 503) 00:35:10.827 1544.777 - 1552.579: 47.4617% ( 543) 00:35:10.827 1552.579 - 1560.381: 48.0969% ( 498) 00:35:10.827 1560.381 - 1568.183: 48.7781% ( 534) 00:35:10.827 1568.183 - 1575.985: 49.4222% ( 505) 00:35:10.827 1575.985 - 1583.787: 50.1173% ( 545) 00:35:10.827 1583.787 - 1591.589: 50.7589% ( 503) 00:35:10.827 1591.589 - 1599.390: 51.4707% ( 558) 00:35:10.827 1599.390 - 1607.192: 52.0931% ( 488) 00:35:10.827 1607.192 - 1614.994: 52.7870% ( 544) 00:35:10.827 1614.994 - 1622.796: 53.4107% ( 489) 00:35:10.827 1622.796 - 1630.598: 54.1339% ( 567) 00:35:10.827 1630.598 - 1638.400: 54.7526% ( 485) 00:35:10.827 
1638.400 - 1646.202: 55.4541% ( 550) 00:35:10.827 1646.202 - 1654.004: 56.1046% ( 510) 00:35:10.827 1654.004 - 1661.806: 56.7934% ( 540) 00:35:10.827 1661.806 - 1669.608: 57.4337% ( 502) 00:35:10.827 1669.608 - 1677.410: 58.1416% ( 555) 00:35:10.827 1677.410 - 1685.211: 58.7755% ( 497) 00:35:10.827 1685.211 - 1693.013: 59.4847% ( 556) 00:35:10.827 1693.013 - 1700.815: 60.1084% ( 489) 00:35:10.827 1700.815 - 1708.617: 60.8023% ( 544) 00:35:10.827 1708.617 - 1716.419: 61.4719% ( 525) 00:35:10.827 1716.419 - 1724.221: 62.1505% ( 532) 00:35:10.827 1724.221 - 1732.023: 62.8214% ( 526) 00:35:10.827 1732.023 - 1739.825: 63.4643% ( 504) 00:35:10.827 1739.825 - 1747.627: 64.1607% ( 546) 00:35:10.827 1747.627 - 1755.429: 64.7921% ( 495) 00:35:10.827 1755.429 - 1763.230: 65.4847% ( 543) 00:35:10.827 1763.230 - 1771.032: 66.1199% ( 498) 00:35:10.827 1771.032 - 1778.834: 66.7819% ( 519) 00:35:10.827 1778.834 - 1786.636: 67.4273% ( 506) 00:35:10.827 1786.636 - 1794.438: 68.1033% ( 530) 00:35:10.827 1794.438 - 1802.240: 68.7168% ( 481) 00:35:10.827 1802.240 - 1810.042: 69.4209% ( 552) 00:35:10.827 1810.042 - 1817.844: 70.0089% ( 461) 00:35:10.827 1817.844 - 1825.646: 70.6926% ( 536) 00:35:10.827 1825.646 - 1833.448: 71.3214% ( 493) 00:35:10.827 1833.448 - 1841.250: 71.9770% ( 514) 00:35:10.827 1841.250 - 1849.051: 72.6059% ( 493) 00:35:10.827 1849.051 - 1856.853: 73.2411% ( 498) 00:35:10.827 1856.853 - 1864.655: 73.8814% ( 502) 00:35:10.827 1864.655 - 1872.457: 74.5255% ( 505) 00:35:10.827 1872.457 - 1880.259: 75.1594% ( 497) 00:35:10.827 1880.259 - 1888.061: 75.7844% ( 490) 00:35:10.827 1888.061 - 1895.863: 76.4388% ( 513) 00:35:10.827 1895.863 - 1903.665: 77.0548% ( 483) 00:35:10.827 1903.665 - 1911.467: 77.6952% ( 502) 00:35:10.827 1911.467 - 1919.269: 78.3265% ( 495) 00:35:10.827 1919.269 - 1927.070: 78.9566% ( 494) 00:35:10.827 1927.070 - 1934.872: 79.5740% ( 484) 00:35:10.827 1934.872 - 1942.674: 80.1849% ( 479) 00:35:10.827 1942.674 - 1950.476: 80.7895% ( 474) 00:35:10.827 1950.476 - 1958.278: 81.3750% ( 459) 00:35:10.827 1958.278 - 1966.080: 81.9324% ( 437) 00:35:10.827 1966.080 - 1973.882: 82.5026% ( 447) 00:35:10.827 1973.882 - 1981.684: 82.9872% ( 380) 00:35:10.827 1981.684 - 1989.486: 83.5077% ( 408) 00:35:10.827 1989.486 - 1997.288: 83.9566% ( 352) 00:35:10.827 1997.288 - 2012.891: 84.8316% ( 686) 00:35:10.827 2012.891 - 2028.495: 85.5867% ( 592) 00:35:10.827 2028.495 - 2044.099: 86.2768% ( 541) 00:35:10.827 2044.099 - 2059.703: 86.9133% ( 499) 00:35:10.827 2059.703 - 2075.307: 87.4694% ( 436) 00:35:10.827 2075.307 - 2090.910: 88.0102% ( 424) 00:35:10.827 2090.910 - 2106.514: 88.5115% ( 393) 00:35:10.827 2106.514 - 2122.118: 88.9834% ( 370) 00:35:10.827 2122.118 - 2137.722: 89.4413% ( 359) 00:35:10.827 2137.722 - 2153.326: 89.8750% ( 340) 00:35:10.827 2153.326 - 2168.930: 90.2768% ( 315) 00:35:10.827 2168.930 - 2184.533: 90.6684% ( 307) 00:35:10.827 2184.533 - 2200.137: 91.0497% ( 299) 00:35:10.827 2200.137 - 2215.741: 91.3941% ( 270) 00:35:10.827 2215.741 - 2231.345: 91.7168% ( 253) 00:35:10.827 2231.345 - 2246.949: 92.0102% ( 230) 00:35:10.827 2246.949 - 2262.552: 92.2997% ( 227) 00:35:10.827 2262.552 - 2278.156: 92.5599% ( 204) 00:35:10.827 2278.156 - 2293.760: 92.8240% ( 207) 00:35:10.827 2293.760 - 2309.364: 93.0702% ( 193) 00:35:10.827 2309.364 - 2324.968: 93.3023% ( 182) 00:35:10.827 2324.968 - 2340.571: 93.5204% ( 171) 00:35:10.827 2340.571 - 2356.175: 93.7245% ( 160) 00:35:10.827 2356.175 - 2371.779: 93.9171% ( 151) 00:35:10.827 2371.779 - 2387.383: 94.1071% ( 149) 00:35:10.827 
2387.383 - 2402.987: 94.2832% ( 138) 00:35:10.827 2402.987 - 2418.590: 94.4515% ( 132) 00:35:10.827 2418.590 - 2434.194: 94.6033% ( 119) 00:35:10.827 2434.194 - 2449.798: 94.7666% ( 128) 00:35:10.827 2449.798 - 2465.402: 94.9209% ( 121) 00:35:10.827 2465.402 - 2481.006: 95.0689% ( 116) 00:35:10.827 2481.006 - 2496.610: 95.2181% ( 117) 00:35:10.827 2496.610 - 2512.213: 95.3661% ( 116) 00:35:10.827 2512.213 - 2527.817: 95.5102% ( 113) 00:35:10.827 2527.817 - 2543.421: 95.6582% ( 116) 00:35:10.827 2543.421 - 2559.025: 95.8023% ( 113) 00:35:10.827 2559.025 - 2574.629: 95.9375% ( 106) 00:35:10.827 2574.629 - 2590.232: 96.0740% ( 107) 00:35:10.827 2590.232 - 2605.836: 96.2143% ( 110) 00:35:10.827 2605.836 - 2621.440: 96.3520% ( 108) 00:35:10.827 2621.440 - 2637.044: 96.4872% ( 106) 00:35:10.827 2637.044 - 2652.648: 96.6122% ( 98) 00:35:10.827 2652.648 - 2668.251: 96.7398% ( 100) 00:35:10.827 2668.251 - 2683.855: 96.8622% ( 96) 00:35:10.827 2683.855 - 2699.459: 96.9796% ( 92) 00:35:10.827 2699.459 - 2715.063: 97.0867% ( 84) 00:35:10.827 2715.063 - 2730.667: 97.1875% ( 79) 00:35:10.827 2730.667 - 2746.270: 97.2819% ( 74) 00:35:10.827 2746.270 - 2761.874: 97.3686% ( 68) 00:35:10.827 2761.874 - 2777.478: 97.4579% ( 70) 00:35:10.827 2777.478 - 2793.082: 97.5434% ( 67) 00:35:10.827 2793.082 - 2808.686: 97.6288% ( 67) 00:35:10.827 2808.686 - 2824.290: 97.7041% ( 59) 00:35:10.827 2824.290 - 2839.893: 97.7908% ( 68) 00:35:10.827 2839.893 - 2855.497: 97.8699% ( 62) 00:35:10.827 2855.497 - 2871.101: 97.9413% ( 56) 00:35:10.827 2871.101 - 2886.705: 98.0166% ( 59) 00:35:10.827 2886.705 - 2902.309: 98.0816% ( 51) 00:35:10.827 2902.309 - 2917.912: 98.1441% ( 49) 00:35:10.827 2917.912 - 2933.516: 98.2028% ( 46) 00:35:10.827 2933.516 - 2949.120: 98.2628% ( 47) 00:35:10.827 2949.120 - 2964.724: 98.3151% ( 41) 00:35:10.827 2964.724 - 2980.328: 98.3737% ( 46) 00:35:10.828 2980.328 - 2995.931: 98.4273% ( 42) 00:35:10.828 2995.931 - 3011.535: 98.4796% ( 41) 00:35:10.828 3011.535 - 3027.139: 98.5306% ( 40) 00:35:10.828 3027.139 - 3042.743: 98.5765% ( 36) 00:35:10.828 3042.743 - 3058.347: 98.6224% ( 36) 00:35:10.828 3058.347 - 3073.950: 98.6645% ( 33) 00:35:10.828 3073.950 - 3089.554: 98.7092% ( 35) 00:35:10.828 3089.554 - 3105.158: 98.7500% ( 32) 00:35:10.828 3105.158 - 3120.762: 98.7883% ( 30) 00:35:10.828 3120.762 - 3136.366: 98.8240% ( 28) 00:35:10.828 3136.366 - 3151.970: 98.8546% ( 24) 00:35:10.828 3151.970 - 3167.573: 98.8929% ( 30) 00:35:10.828 3167.573 - 3183.177: 98.9286% ( 28) 00:35:10.828 3183.177 - 3198.781: 98.9617% ( 26) 00:35:10.828 3198.781 - 3214.385: 98.9949% ( 26) 00:35:10.828 3214.385 - 3229.989: 99.0293% ( 27) 00:35:10.828 3229.989 - 3245.592: 99.0587% ( 23) 00:35:10.828 3245.592 - 3261.196: 99.0906% ( 25) 00:35:10.828 3261.196 - 3276.800: 99.1224% ( 25) 00:35:10.828 3276.800 - 3292.404: 99.1505% ( 22) 00:35:10.828 3292.404 - 3308.008: 99.1811% ( 24) 00:35:10.828 3308.008 - 3323.611: 99.2079% ( 21) 00:35:10.828 3323.611 - 3339.215: 99.2347% ( 21) 00:35:10.828 3339.215 - 3354.819: 99.2577% ( 18) 00:35:10.828 3354.819 - 3370.423: 99.2781% ( 16) 00:35:10.828 3370.423 - 3386.027: 99.3048% ( 21) 00:35:10.828 3386.027 - 3401.630: 99.3253% ( 16) 00:35:10.828 3401.630 - 3417.234: 99.3469% ( 17) 00:35:10.828 3417.234 - 3432.838: 99.3712% ( 19) 00:35:10.828 3432.838 - 3448.442: 99.3916% ( 16) 00:35:10.828 3448.442 - 3464.046: 99.4107% ( 15) 00:35:10.828 3464.046 - 3479.650: 99.4286% ( 14) 00:35:10.828 3479.650 - 3495.253: 99.4452% ( 13) 00:35:10.828 3495.253 - 3510.857: 99.4592% ( 11) 00:35:10.828 3510.857 
- 3526.461: 99.4732% ( 11) 00:35:10.828 3526.461 - 3542.065: 99.4834% ( 8) 00:35:10.828 3542.065 - 3557.669: 99.4974% ( 11) 00:35:10.828 3557.669 - 3573.272: 99.5115% ( 11) 00:35:10.828 3573.272 - 3588.876: 99.5230% ( 9) 00:35:10.828 3588.876 - 3604.480: 99.5370% ( 11) 00:35:10.828 3604.480 - 3620.084: 99.5497% ( 10) 00:35:10.828 3620.084 - 3635.688: 99.5612% ( 9) 00:35:10.828 3635.688 - 3651.291: 99.5689% ( 6) 00:35:10.828 3651.291 - 3666.895: 99.5804% ( 9) 00:35:10.828 3666.895 - 3682.499: 99.5893% ( 7) 00:35:10.828 3682.499 - 3698.103: 99.5957% ( 5) 00:35:10.828 3698.103 - 3713.707: 99.6046% ( 7) 00:35:10.828 3713.707 - 3729.310: 99.6097% ( 4) 00:35:10.828 3729.310 - 3744.914: 99.6161% ( 5) 00:35:10.828 3744.914 - 3760.518: 99.6224% ( 5) 00:35:10.828 3760.518 - 3776.122: 99.6263% ( 3) 00:35:10.828 3776.122 - 3791.726: 99.6314% ( 4) 00:35:10.828 3791.726 - 3807.330: 99.6365% ( 4) 00:35:10.828 3807.330 - 3822.933: 99.6416% ( 4) 00:35:10.828 3822.933 - 3838.537: 99.6480% ( 5) 00:35:10.828 3838.537 - 3854.141: 99.6518% ( 3) 00:35:10.828 3854.141 - 3869.745: 99.6569% ( 4) 00:35:10.828 3869.745 - 3885.349: 99.6620% ( 4) 00:35:10.828 3885.349 - 3900.952: 99.6645% ( 2) 00:35:10.828 3900.952 - 3916.556: 99.6684% ( 3) 00:35:10.828 3916.556 - 3932.160: 99.6709% ( 2) 00:35:10.828 3932.160 - 3947.764: 99.6722% ( 1) 00:35:10.828 3947.764 - 3963.368: 99.6747% ( 2) 00:35:10.828 3963.368 - 3978.971: 99.6760% ( 1) 00:35:10.828 3978.971 - 3994.575: 99.6798% ( 3) 00:35:10.828 3994.575 - 4025.783: 99.6824% ( 2) 00:35:10.828 4025.783 - 4056.990: 99.6888% ( 5) 00:35:10.828 4056.990 - 4088.198: 99.6926% ( 3) 00:35:10.828 4088.198 - 4119.406: 99.6952% ( 2) 00:35:10.828 4119.406 - 4150.613: 99.7003% ( 4) 00:35:10.828 4150.613 - 4181.821: 99.7054% ( 4) 00:35:10.828 4181.821 - 4213.029: 99.7079% ( 2) 00:35:10.828 4213.029 - 4244.236: 99.7130% ( 4) 00:35:10.828 4244.236 - 4275.444: 99.7168% ( 3) 00:35:10.828 4275.444 - 4306.651: 99.7219% ( 4) 00:35:10.828 4306.651 - 4337.859: 99.7258% ( 3) 00:35:10.828 4337.859 - 4369.067: 99.7296% ( 3) 00:35:10.828 4369.067 - 4400.274: 99.7347% ( 4) 00:35:10.828 4400.274 - 4431.482: 99.7372% ( 2) 00:35:10.828 4431.482 - 4462.690: 99.7385% ( 1) 00:35:10.828 4462.690 - 4493.897: 99.7398% ( 1) 00:35:10.828 4493.897 - 4525.105: 99.7411% ( 1) 00:35:10.828 4556.312 - 4587.520: 99.7423% ( 1) 00:35:10.828 4587.520 - 4618.728: 99.7436% ( 1) 00:35:10.828 4618.728 - 4649.935: 99.7449% ( 1) 00:35:10.828 4681.143 - 4712.350: 99.7462% ( 1) 00:35:10.828 4712.350 - 4743.558: 99.7474% ( 1) 00:35:10.828 4774.766 - 4805.973: 99.7487% ( 1) 00:35:10.828 4837.181 - 4868.389: 99.7500% ( 1) 00:35:10.828 4899.596 - 4930.804: 99.7513% ( 1) 00:35:10.828 4930.804 - 4962.011: 99.7551% ( 3) 00:35:10.828 4962.011 - 4993.219: 99.7602% ( 4) 00:35:10.828 4993.219 - 5024.427: 99.7653% ( 4) 00:35:10.828 5024.427 - 5055.634: 99.7704% ( 4) 00:35:10.828 5055.634 - 5086.842: 99.7755% ( 4) 00:35:10.828 5086.842 - 5118.050: 99.7793% ( 3) 00:35:10.828 5118.050 - 5149.257: 99.7857% ( 5) 00:35:10.828 5149.257 - 5180.465: 99.7908% ( 4) 00:35:10.828 5180.465 - 5211.672: 99.7959% ( 4) 00:35:10.828 5211.672 - 5242.880: 99.8023% ( 5) 00:35:10.828 5242.880 - 5274.088: 99.8087% ( 5) 00:35:10.828 5274.088 - 5305.295: 99.8125% ( 3) 00:35:10.828 5305.295 - 5336.503: 99.8189% ( 5) 00:35:10.828 5336.503 - 5367.710: 99.8240% ( 4) 00:35:10.828 5367.710 - 5398.918: 99.8304% ( 5) 00:35:10.828 5398.918 - 5430.126: 99.8367% ( 5) 00:35:10.828 5430.126 - 5461.333: 99.8431% ( 5) 00:35:10.828 5461.333 - 5492.541: 99.8482% ( 4) 00:35:10.828 
5492.541 - 5523.749: 99.8546% ( 5) 00:35:10.828 5523.749 - 5554.956: 99.8597% ( 4) 00:35:10.828 5554.956 - 5586.164: 99.8648% ( 4) 00:35:10.828 5586.164 - 5617.371: 99.8712% ( 5) 00:35:10.828 5617.371 - 5648.579: 99.8776% ( 5) 00:35:10.828 5648.579 - 5679.787: 99.8839% ( 5) 00:35:10.828 5679.787 - 5710.994: 99.8903% ( 5) 00:35:10.828 5710.994 - 5742.202: 99.8954% ( 4) 00:35:10.828 5742.202 - 5773.410: 99.9018% ( 5) 00:35:10.828 5773.410 - 5804.617: 99.9082% ( 5) 00:35:10.828 5804.617 - 5835.825: 99.9133% ( 4) 00:35:10.828 5835.825 - 5867.032: 99.9184% ( 4) 00:35:10.828 5867.032 - 5898.240: 99.9247% ( 5) 00:35:10.828 5898.240 - 5929.448: 99.9311% ( 5) 00:35:10.828 5929.448 - 5960.655: 99.9375% ( 5) 00:35:10.828 5960.655 - 5991.863: 99.9413% ( 3) 00:35:10.828 5991.863 - 6023.070: 99.9452% ( 3) 00:35:10.828 6023.070 - 6054.278: 99.9490% ( 3) 00:35:10.828 6054.278 - 6085.486: 99.9503% ( 1) 00:35:10.828 6085.486 - 6116.693: 99.9515% ( 1) 00:35:10.828 6116.693 - 6147.901: 99.9528% ( 1) 00:35:10.828 6147.901 - 6179.109: 99.9541% ( 1) 00:35:10.828 6179.109 - 6210.316: 99.9554% ( 1) 00:35:10.828 6210.316 - 6241.524: 99.9566% ( 1) 00:35:10.828 6272.731 - 6303.939: 99.9579% ( 1) 00:35:10.828 6303.939 - 6335.147: 99.9592% ( 1) 00:35:10.828 6335.147 - 6366.354: 99.9605% ( 1) 00:35:10.828 6366.354 - 6397.562: 99.9617% ( 1) 00:35:10.828 6397.562 - 6428.770: 99.9630% ( 1) 00:35:10.828 6428.770 - 6459.977: 99.9643% ( 1) 00:35:10.828 6459.977 - 6491.185: 99.9656% ( 1) 00:35:10.828 6522.392 - 6553.600: 99.9668% ( 1) 00:35:10.828 6553.600 - 6584.808: 99.9681% ( 1) 00:35:10.828 6584.808 - 6616.015: 99.9694% ( 1) 00:35:10.828 6616.015 - 6647.223: 99.9707% ( 1) 00:35:10.828 6647.223 - 6678.430: 99.9719% ( 1) 00:35:10.828 6709.638 - 6740.846: 99.9732% ( 1) 00:35:10.828 6740.846 - 6772.053: 99.9745% ( 1) 00:35:10.828 6772.053 - 6803.261: 99.9758% ( 1) 00:35:10.828 6803.261 - 6834.469: 99.9770% ( 1) 00:35:10.829 6865.676 - 6896.884: 99.9783% ( 1) 00:35:10.829 6896.884 - 6928.091: 99.9796% ( 1) 00:35:10.829 6928.091 - 6959.299: 99.9809% ( 1) 00:35:10.829 6959.299 - 6990.507: 99.9821% ( 1) 00:35:10.829 6990.507 - 7021.714: 99.9834% ( 1) 00:35:10.829 7021.714 - 7052.922: 99.9847% ( 1) 00:35:10.829 7052.922 - 7084.130: 99.9860% ( 1) 00:35:10.829 7115.337 - 7146.545: 99.9872% ( 1) 00:35:10.829 7146.545 - 7177.752: 99.9885% ( 1) 00:35:10.829 7177.752 - 7208.960: 99.9898% ( 1) 00:35:10.829 7208.960 - 7240.168: 99.9911% ( 1) 00:35:10.829 7271.375 - 7302.583: 99.9923% ( 1) 00:35:10.829 7302.583 - 7333.790: 99.9936% ( 1) 00:35:10.829 7333.790 - 7364.998: 99.9949% ( 1) 00:35:10.829 7364.998 - 7396.206: 99.9962% ( 1) 00:35:10.829 7427.413 - 7458.621: 99.9974% ( 1) 00:35:10.829 7458.621 - 7489.829: 99.9987% ( 1) 00:35:10.829 7552.244 - 7583.451: 100.0000% ( 1) 00:35:10.829 00:35:10.829 00:49:04 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:35:12.206 Initializing NVMe Controllers 00:35:12.206 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:12.206 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:35:12.206 Initialization complete. Launching workers. 
00:35:12.206 ======================================================== 00:35:12.206 Latency(us) 00:35:12.206 Device Information : IOPS MiB/s Average min max 00:35:12.206 PCIE (0000:00:10.0) NSID 1 from core 0: 67697.83 793.33 1889.07 748.30 8013.92 00:35:12.206 ======================================================== 00:35:12.206 Total : 67697.83 793.33 1889.07 748.30 8013.92 00:35:12.206 00:35:12.206 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:35:12.206 ================================================================================= 00:35:12.206 1.00000% : 1068.861us 00:35:12.206 10.00000% : 1256.107us 00:35:12.206 25.00000% : 1443.352us 00:35:12.206 50.00000% : 1763.230us 00:35:12.206 75.00000% : 2153.326us 00:35:12.206 90.00000% : 2668.251us 00:35:12.206 95.00000% : 3214.385us 00:35:12.206 98.00000% : 3713.707us 00:35:12.206 99.00000% : 4119.406us 00:35:12.206 99.50000% : 4525.105us 00:35:12.206 99.90000% : 5991.863us 00:35:12.206 99.99000% : 7770.697us 00:35:12.206 99.99900% : 8051.566us 00:35:12.206 99.99990% : 8051.566us 00:35:12.206 99.99999% : 8051.566us 00:35:12.206 00:35:12.206 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:35:12.206 ============================================================================== 00:35:12.206 Range in us Cumulative IO count 00:35:12.206 745.082 - 748.983: 0.0015% ( 1) 00:35:12.206 760.686 - 764.587: 0.0030% ( 1) 00:35:12.206 807.497 - 811.398: 0.0044% ( 1) 00:35:12.206 823.101 - 827.002: 0.0059% ( 1) 00:35:12.206 858.210 - 862.110: 0.0074% ( 1) 00:35:12.206 869.912 - 873.813: 0.0089% ( 1) 00:35:12.206 873.813 - 877.714: 0.0103% ( 1) 00:35:12.206 877.714 - 881.615: 0.0118% ( 1) 00:35:12.206 881.615 - 885.516: 0.0133% ( 1) 00:35:12.206 889.417 - 893.318: 0.0177% ( 3) 00:35:12.206 893.318 - 897.219: 0.0207% ( 2) 00:35:12.206 905.021 - 908.922: 0.0236% ( 2) 00:35:12.206 908.922 - 912.823: 0.0266% ( 2) 00:35:12.206 912.823 - 916.724: 0.0295% ( 2) 00:35:12.206 916.724 - 920.625: 0.0325% ( 2) 00:35:12.206 920.625 - 924.526: 0.0354% ( 2) 00:35:12.206 924.526 - 928.427: 0.0384% ( 2) 00:35:12.206 928.427 - 932.328: 0.0413% ( 2) 00:35:12.206 932.328 - 936.229: 0.0443% ( 2) 00:35:12.206 936.229 - 940.130: 0.0546% ( 7) 00:35:12.206 940.130 - 944.030: 0.0605% ( 4) 00:35:12.206 944.030 - 947.931: 0.0650% ( 3) 00:35:12.206 947.931 - 951.832: 0.0723% ( 5) 00:35:12.206 951.832 - 955.733: 0.0812% ( 6) 00:35:12.206 955.733 - 959.634: 0.0886% ( 5) 00:35:12.206 959.634 - 963.535: 0.1004% ( 8) 00:35:12.206 963.535 - 967.436: 0.1063% ( 4) 00:35:12.206 967.436 - 971.337: 0.1196% ( 9) 00:35:12.206 971.337 - 975.238: 0.1388% ( 13) 00:35:12.206 975.238 - 979.139: 0.1535% ( 10) 00:35:12.206 979.139 - 983.040: 0.1609% ( 5) 00:35:12.206 983.040 - 986.941: 0.1904% ( 20) 00:35:12.206 986.941 - 990.842: 0.2037% ( 9) 00:35:12.206 990.842 - 994.743: 0.2215% ( 12) 00:35:12.206 994.743 - 998.644: 0.2451% ( 16) 00:35:12.206 998.644 - 1006.446: 0.2908% ( 31) 00:35:12.206 1006.446 - 1014.248: 0.3410% ( 34) 00:35:12.206 1014.248 - 1022.050: 0.3942% ( 36) 00:35:12.206 1022.050 - 1029.851: 0.4518% ( 39) 00:35:12.206 1029.851 - 1037.653: 0.5477% ( 65) 00:35:12.206 1037.653 - 1045.455: 0.6393% ( 62) 00:35:12.206 1045.455 - 1053.257: 0.7470% ( 73) 00:35:12.206 1053.257 - 1061.059: 0.8563% ( 74) 00:35:12.206 1061.059 - 1068.861: 1.0039% ( 100) 00:35:12.206 1068.861 - 1076.663: 1.1353% ( 89) 00:35:12.206 1076.663 - 1084.465: 1.3007% ( 112) 00:35:12.206 1084.465 - 1092.267: 1.4955% ( 132) 00:35:12.206 1092.267 - 1100.069: 1.7037% ( 141) 00:35:12.206 
1100.069 - 1107.870: 1.9370% ( 158) 00:35:12.206 1107.870 - 1115.672: 2.1835% ( 167) 00:35:12.206 1115.672 - 1123.474: 2.4729% ( 196) 00:35:12.206 1123.474 - 1131.276: 2.8242% ( 238) 00:35:12.206 1131.276 - 1139.078: 3.1653% ( 231) 00:35:12.206 1139.078 - 1146.880: 3.5019% ( 228) 00:35:12.206 1146.880 - 1154.682: 3.9226% ( 285) 00:35:12.206 1154.682 - 1162.484: 4.3286% ( 275) 00:35:12.206 1162.484 - 1170.286: 4.7464% ( 283) 00:35:12.206 1170.286 - 1178.088: 5.1864% ( 298) 00:35:12.206 1178.088 - 1185.890: 5.6559% ( 318) 00:35:12.206 1185.890 - 1193.691: 6.1593% ( 341) 00:35:12.206 1193.691 - 1201.493: 6.6701% ( 346) 00:35:12.206 1201.493 - 1209.295: 7.1854% ( 349) 00:35:12.206 1209.295 - 1217.097: 7.7006% ( 349) 00:35:12.206 1217.097 - 1224.899: 8.2424% ( 367) 00:35:12.206 1224.899 - 1232.701: 8.7695% ( 357) 00:35:12.206 1232.701 - 1240.503: 9.2950% ( 356) 00:35:12.206 1240.503 - 1248.305: 9.8723% ( 391) 00:35:12.206 1248.305 - 1256.107: 10.4141% ( 367) 00:35:12.206 1256.107 - 1263.909: 11.0032% ( 399) 00:35:12.206 1263.909 - 1271.710: 11.5834% ( 393) 00:35:12.206 1271.710 - 1279.512: 12.1562% ( 388) 00:35:12.206 1279.512 - 1287.314: 12.7526% ( 404) 00:35:12.206 1287.314 - 1295.116: 13.3624% ( 413) 00:35:12.206 1295.116 - 1302.918: 13.9426% ( 393) 00:35:12.206 1302.918 - 1310.720: 14.5287% ( 397) 00:35:12.206 1310.720 - 1318.522: 15.1104% ( 394) 00:35:12.206 1318.522 - 1326.324: 15.7540% ( 436) 00:35:12.206 1326.324 - 1334.126: 16.3342% ( 393) 00:35:12.206 1334.126 - 1341.928: 16.9351% ( 407) 00:35:12.206 1341.928 - 1349.730: 17.5227% ( 398) 00:35:12.206 1349.730 - 1357.531: 18.1708% ( 439) 00:35:12.206 1357.531 - 1365.333: 18.7850% ( 416) 00:35:12.206 1365.333 - 1373.135: 19.4021% ( 418) 00:35:12.206 1373.135 - 1380.937: 19.9852% ( 395) 00:35:12.206 1380.937 - 1388.739: 20.6201% ( 430) 00:35:12.206 1388.739 - 1396.541: 21.2623% ( 435) 00:35:12.206 1396.541 - 1404.343: 21.9178% ( 444) 00:35:12.206 1404.343 - 1412.145: 22.5113% ( 402) 00:35:12.206 1412.145 - 1419.947: 23.1786% ( 452) 00:35:12.206 1419.947 - 1427.749: 23.7972% ( 419) 00:35:12.206 1427.749 - 1435.550: 24.4216% ( 423) 00:35:12.206 1435.550 - 1443.352: 25.0845% ( 449) 00:35:12.206 1443.352 - 1451.154: 25.7356% ( 441) 00:35:12.206 1451.154 - 1458.956: 26.3557% ( 420) 00:35:12.206 1458.956 - 1466.758: 27.0244% ( 453) 00:35:12.206 1466.758 - 1474.560: 27.7360% ( 482) 00:35:12.206 1474.560 - 1482.362: 28.4181% ( 462) 00:35:12.206 1482.362 - 1490.164: 29.1164% ( 473) 00:35:12.207 1490.164 - 1497.966: 29.7468% ( 427) 00:35:12.207 1497.966 - 1505.768: 30.4628% ( 485) 00:35:12.207 1505.768 - 1513.570: 31.1065% ( 436) 00:35:12.207 1513.570 - 1521.371: 31.8639% ( 513) 00:35:12.207 1521.371 - 1529.173: 32.5061% ( 435) 00:35:12.207 1529.173 - 1536.975: 33.1690% ( 449) 00:35:12.207 1536.975 - 1544.777: 33.8717% ( 476) 00:35:12.207 1544.777 - 1552.579: 34.5139% ( 435) 00:35:12.207 1552.579 - 1560.381: 35.1532% ( 433) 00:35:12.207 1560.381 - 1568.183: 35.8352% ( 462) 00:35:12.207 1568.183 - 1575.985: 36.4509% ( 417) 00:35:12.207 1575.985 - 1583.787: 37.1167% ( 451) 00:35:12.207 1583.787 - 1591.589: 37.7220% ( 410) 00:35:12.207 1591.589 - 1599.390: 38.3450% ( 422) 00:35:12.207 1599.390 - 1607.192: 38.9902% ( 437) 00:35:12.207 1607.192 - 1614.994: 39.6088% ( 419) 00:35:12.207 1614.994 - 1622.796: 40.1742% ( 383) 00:35:12.207 1622.796 - 1630.598: 40.7707% ( 404) 00:35:12.207 1630.598 - 1638.400: 41.4025% ( 428) 00:35:12.207 1638.400 - 1646.202: 41.9443% ( 367) 00:35:12.207 1646.202 - 1654.004: 42.5157% ( 387) 00:35:12.207 1654.004 - 1661.806: 
43.0944% ( 392) 00:35:12.207 1661.806 - 1669.608: 43.6687% ( 389) 00:35:12.207 1669.608 - 1677.410: 44.2209% ( 374) 00:35:12.207 1677.410 - 1685.211: 44.7730% ( 374) 00:35:12.207 1685.211 - 1693.013: 45.3355% ( 381) 00:35:12.207 1693.013 - 1700.815: 45.9157% ( 393) 00:35:12.207 1700.815 - 1708.617: 46.4885% ( 388) 00:35:12.207 1708.617 - 1716.419: 47.0717% ( 395) 00:35:12.207 1716.419 - 1724.221: 47.6120% ( 366) 00:35:12.207 1724.221 - 1732.023: 48.1597% ( 371) 00:35:12.207 1732.023 - 1739.825: 48.7030% ( 368) 00:35:12.207 1739.825 - 1747.627: 49.2744% ( 387) 00:35:12.207 1747.627 - 1755.429: 49.8369% ( 381) 00:35:12.207 1755.429 - 1763.230: 50.3831% ( 370) 00:35:12.207 1763.230 - 1771.032: 50.9220% ( 365) 00:35:12.207 1771.032 - 1778.834: 51.4874% ( 383) 00:35:12.207 1778.834 - 1786.636: 52.0469% ( 379) 00:35:12.207 1786.636 - 1794.438: 52.6183% ( 387) 00:35:12.207 1794.438 - 1802.240: 53.1557% ( 364) 00:35:12.207 1802.240 - 1810.042: 53.7093% ( 375) 00:35:12.207 1810.042 - 1817.844: 54.2364% ( 357) 00:35:12.207 1817.844 - 1825.646: 54.8180% ( 394) 00:35:12.207 1825.646 - 1833.448: 55.3702% ( 374) 00:35:12.207 1833.448 - 1841.250: 55.8899% ( 352) 00:35:12.207 1841.250 - 1849.051: 56.4494% ( 379) 00:35:12.207 1849.051 - 1856.853: 56.9705% ( 353) 00:35:12.207 1856.853 - 1864.655: 57.5404% ( 386) 00:35:12.207 1864.655 - 1872.457: 58.0734% ( 361) 00:35:12.207 1872.457 - 1880.259: 58.6477% ( 389) 00:35:12.207 1880.259 - 1888.061: 59.1614% ( 348) 00:35:12.207 1888.061 - 1895.863: 59.7254% ( 382) 00:35:12.207 1895.863 - 1903.665: 60.2776% ( 374) 00:35:12.207 1903.665 - 1911.467: 60.8504% ( 388) 00:35:12.207 1911.467 - 1919.269: 61.3760% ( 356) 00:35:12.207 1919.269 - 1927.070: 61.9680% ( 401) 00:35:12.207 1927.070 - 1934.872: 62.4862% ( 351) 00:35:12.207 1934.872 - 1942.674: 63.0590% ( 388) 00:35:12.207 1942.674 - 1950.476: 63.6111% ( 374) 00:35:12.207 1950.476 - 1958.278: 64.1559% ( 369) 00:35:12.207 1958.278 - 1966.080: 64.6918% ( 363) 00:35:12.207 1966.080 - 1973.882: 65.2395% ( 371) 00:35:12.207 1973.882 - 1981.684: 65.7828% ( 368) 00:35:12.207 1981.684 - 1989.486: 66.3069% ( 355) 00:35:12.207 1989.486 - 1997.288: 66.8428% ( 363) 00:35:12.207 1997.288 - 2012.891: 67.8659% ( 693) 00:35:12.207 2012.891 - 2028.495: 68.8669% ( 678) 00:35:12.207 2028.495 - 2044.099: 69.8088% ( 638) 00:35:12.207 2044.099 - 2059.703: 70.6990% ( 603) 00:35:12.207 2059.703 - 2075.307: 71.5524% ( 578) 00:35:12.207 2075.307 - 2090.910: 72.3939% ( 570) 00:35:12.207 2090.910 - 2106.514: 73.1837% ( 535) 00:35:12.207 2106.514 - 2122.118: 73.9500% ( 519) 00:35:12.207 2122.118 - 2137.722: 74.7117% ( 516) 00:35:12.207 2137.722 - 2153.326: 75.4514% ( 501) 00:35:12.207 2153.326 - 2168.930: 76.1792% ( 493) 00:35:12.207 2168.930 - 2184.533: 76.8864% ( 479) 00:35:12.207 2184.533 - 2200.137: 77.5729% ( 465) 00:35:12.207 2200.137 - 2215.741: 78.2535% ( 461) 00:35:12.207 2215.741 - 2231.345: 78.8913% ( 432) 00:35:12.207 2231.345 - 2246.949: 79.5231% ( 428) 00:35:12.207 2246.949 - 2262.552: 80.1255% ( 408) 00:35:12.207 2262.552 - 2278.156: 80.7441% ( 419) 00:35:12.207 2278.156 - 2293.760: 81.3302% ( 397) 00:35:12.207 2293.760 - 2309.364: 81.8897% ( 379) 00:35:12.207 2309.364 - 2324.968: 82.4596% ( 386) 00:35:12.207 2324.968 - 2340.571: 82.9896% ( 359) 00:35:12.207 2340.571 - 2356.175: 83.5196% ( 359) 00:35:12.207 2356.175 - 2371.779: 84.0275% ( 344) 00:35:12.207 2371.779 - 2387.383: 84.4984% ( 319) 00:35:12.207 2387.383 - 2402.987: 84.9546% ( 309) 00:35:12.207 2402.987 - 2418.590: 85.4049% ( 305) 00:35:12.207 2418.590 - 2434.194: 
85.7991% ( 267) 00:35:12.207 2434.194 - 2449.798: 86.2095% ( 278) 00:35:12.207 2449.798 - 2465.402: 86.5741% ( 247) 00:35:12.207 2465.402 - 2481.006: 86.9418% ( 249) 00:35:12.207 2481.006 - 2496.610: 87.2769% ( 227) 00:35:12.207 2496.610 - 2512.213: 87.6002% ( 219) 00:35:12.207 2512.213 - 2527.817: 87.9161% ( 214) 00:35:12.207 2527.817 - 2543.421: 88.1981% ( 191) 00:35:12.207 2543.421 - 2559.025: 88.4742% ( 187) 00:35:12.207 2559.025 - 2574.629: 88.7458% ( 184) 00:35:12.207 2574.629 - 2590.232: 88.9998% ( 172) 00:35:12.207 2590.232 - 2605.836: 89.2449% ( 166) 00:35:12.207 2605.836 - 2621.440: 89.4737% ( 155) 00:35:12.207 2621.440 - 2637.044: 89.6966% ( 151) 00:35:12.207 2637.044 - 2652.648: 89.9122% ( 146) 00:35:12.207 2652.648 - 2668.251: 90.1100% ( 134) 00:35:12.207 2668.251 - 2683.855: 90.2990% ( 128) 00:35:12.207 2683.855 - 2699.459: 90.4983% ( 135) 00:35:12.207 2699.459 - 2715.063: 90.6754% ( 120) 00:35:12.207 2715.063 - 2730.667: 90.8688% ( 131) 00:35:12.207 2730.667 - 2746.270: 91.0357% ( 113) 00:35:12.207 2746.270 - 2761.874: 91.2158% ( 122) 00:35:12.207 2761.874 - 2777.478: 91.3870% ( 116) 00:35:12.207 2777.478 - 2793.082: 91.5361% ( 101) 00:35:12.207 2793.082 - 2808.686: 91.6882% ( 103) 00:35:12.207 2808.686 - 2824.290: 91.8388% ( 102) 00:35:12.207 2824.290 - 2839.893: 92.0027% ( 111) 00:35:12.207 2839.893 - 2855.497: 92.1606% ( 107) 00:35:12.207 2855.497 - 2871.101: 92.3083% ( 100) 00:35:12.207 2871.101 - 2886.705: 92.4559% ( 100) 00:35:12.207 2886.705 - 2902.309: 92.6124% ( 106) 00:35:12.207 2902.309 - 2917.912: 92.7585% ( 99) 00:35:12.207 2917.912 - 2933.516: 92.8958% ( 93) 00:35:12.207 2933.516 - 2949.120: 93.0228% ( 86) 00:35:12.207 2949.120 - 2964.724: 93.1483% ( 85) 00:35:12.207 2964.724 - 2980.328: 93.2708% ( 83) 00:35:12.207 2980.328 - 2995.931: 93.3934% ( 83) 00:35:12.207 2995.931 - 3011.535: 93.5203% ( 86) 00:35:12.207 3011.535 - 3027.139: 93.6503% ( 88) 00:35:12.207 3027.139 - 3042.743: 93.7787% ( 87) 00:35:12.207 3042.743 - 3058.347: 93.8894% ( 75) 00:35:12.207 3058.347 - 3073.950: 94.0046% ( 78) 00:35:12.207 3073.950 - 3089.554: 94.1197% ( 78) 00:35:12.207 3089.554 - 3105.158: 94.2349% ( 78) 00:35:12.207 3105.158 - 3120.762: 94.3604% ( 85) 00:35:12.207 3120.762 - 3136.366: 94.4800% ( 81) 00:35:12.207 3136.366 - 3151.970: 94.5966% ( 79) 00:35:12.207 3151.970 - 3167.573: 94.7117% ( 78) 00:35:12.207 3167.573 - 3183.177: 94.8151% ( 70) 00:35:12.207 3183.177 - 3198.781: 94.9288% ( 77) 00:35:12.207 3198.781 - 3214.385: 95.0395% ( 75) 00:35:12.207 3214.385 - 3229.989: 95.1473% ( 73) 00:35:12.207 3229.989 - 3245.592: 95.2447% ( 66) 00:35:12.207 3245.592 - 3261.196: 95.3554% ( 75) 00:35:12.208 3261.196 - 3276.800: 95.4617% ( 72) 00:35:12.208 3276.800 - 3292.404: 95.5621% ( 68) 00:35:12.208 3292.404 - 3308.008: 95.6640% ( 69) 00:35:12.208 3308.008 - 3323.611: 95.7791% ( 78) 00:35:12.208 3323.611 - 3339.215: 95.8810% ( 69) 00:35:12.208 3339.215 - 3354.819: 95.9873% ( 72) 00:35:12.208 3354.819 - 3370.423: 96.1010% ( 77) 00:35:12.208 3370.423 - 3386.027: 96.2102% ( 74) 00:35:12.208 3386.027 - 3401.630: 96.3254% ( 78) 00:35:12.208 3401.630 - 3417.234: 96.4435% ( 80) 00:35:12.208 3417.234 - 3432.838: 96.5542% ( 75) 00:35:12.208 3432.838 - 3448.442: 96.6502% ( 65) 00:35:12.208 3448.442 - 3464.046: 96.7535% ( 70) 00:35:12.208 3464.046 - 3479.650: 96.8657% ( 76) 00:35:12.208 3479.650 - 3495.253: 96.9661% ( 68) 00:35:12.208 3495.253 - 3510.857: 97.0517% ( 58) 00:35:12.208 3510.857 - 3526.461: 97.1388% ( 59) 00:35:12.208 3526.461 - 3542.065: 97.2333% ( 64) 00:35:12.208 3542.065 - 
3557.669: 97.3160% ( 56) 00:35:12.208 3557.669 - 3573.272: 97.3987% ( 56) 00:35:12.208 3573.272 - 3588.876: 97.4740% ( 51) 00:35:12.208 3588.876 - 3604.480: 97.5434% ( 47) 00:35:12.208 3604.480 - 3620.084: 97.6216% ( 53) 00:35:12.208 3620.084 - 3635.688: 97.6940% ( 49) 00:35:12.208 3635.688 - 3651.291: 97.7574% ( 43) 00:35:12.208 3651.291 - 3666.895: 97.8253% ( 46) 00:35:12.208 3666.895 - 3682.499: 97.8829% ( 39) 00:35:12.208 3682.499 - 3698.103: 97.9479% ( 44) 00:35:12.208 3698.103 - 3713.707: 98.0010% ( 36) 00:35:12.208 3713.707 - 3729.310: 98.0616% ( 41) 00:35:12.208 3729.310 - 3744.914: 98.1206% ( 40) 00:35:12.208 3744.914 - 3760.518: 98.1708% ( 34) 00:35:12.208 3760.518 - 3776.122: 98.2225% ( 35) 00:35:12.208 3776.122 - 3791.726: 98.2683% ( 31) 00:35:12.208 3791.726 - 3807.330: 98.3140% ( 31) 00:35:12.208 3807.330 - 3822.933: 98.3642% ( 34) 00:35:12.208 3822.933 - 3838.537: 98.4115% ( 32) 00:35:12.208 3838.537 - 3854.141: 98.4617% ( 34) 00:35:12.208 3854.141 - 3869.745: 98.5074% ( 31) 00:35:12.208 3869.745 - 3885.349: 98.5517% ( 30) 00:35:12.208 3885.349 - 3900.952: 98.5930% ( 28) 00:35:12.208 3900.952 - 3916.556: 98.6300% ( 25) 00:35:12.208 3916.556 - 3932.160: 98.6624% ( 22) 00:35:12.208 3932.160 - 3947.764: 98.6979% ( 24) 00:35:12.208 3947.764 - 3963.368: 98.7348% ( 25) 00:35:12.208 3963.368 - 3978.971: 98.7643% ( 20) 00:35:12.208 3978.971 - 3994.575: 98.7938% ( 20) 00:35:12.208 3994.575 - 4025.783: 98.8514% ( 39) 00:35:12.208 4025.783 - 4056.990: 98.9060% ( 37) 00:35:12.208 4056.990 - 4088.198: 98.9636% ( 39) 00:35:12.208 4088.198 - 4119.406: 99.0241% ( 41) 00:35:12.208 4119.406 - 4150.613: 99.0906% ( 45) 00:35:12.208 4150.613 - 4181.821: 99.1437% ( 36) 00:35:12.208 4181.821 - 4213.029: 99.1924% ( 33) 00:35:12.208 4213.029 - 4244.236: 99.2397% ( 32) 00:35:12.208 4244.236 - 4275.444: 99.2825% ( 29) 00:35:12.208 4275.444 - 4306.651: 99.3209% ( 26) 00:35:12.208 4306.651 - 4337.859: 99.3548% ( 23) 00:35:12.208 4337.859 - 4369.067: 99.3829% ( 19) 00:35:12.208 4369.067 - 4400.274: 99.4124% ( 20) 00:35:12.208 4400.274 - 4431.482: 99.4375% ( 17) 00:35:12.208 4431.482 - 4462.690: 99.4611% ( 16) 00:35:12.208 4462.690 - 4493.897: 99.4848% ( 16) 00:35:12.208 4493.897 - 4525.105: 99.5099% ( 17) 00:35:12.208 4525.105 - 4556.312: 99.5276% ( 12) 00:35:12.208 4556.312 - 4587.520: 99.5468% ( 13) 00:35:12.208 4587.520 - 4618.728: 99.5615% ( 10) 00:35:12.208 4618.728 - 4649.935: 99.5748% ( 9) 00:35:12.208 4649.935 - 4681.143: 99.5881% ( 9) 00:35:12.208 4681.143 - 4712.350: 99.6043% ( 11) 00:35:12.208 4712.350 - 4743.558: 99.6176% ( 9) 00:35:12.208 4743.558 - 4774.766: 99.6309% ( 9) 00:35:12.208 4774.766 - 4805.973: 99.6457% ( 10) 00:35:12.208 4805.973 - 4837.181: 99.6604% ( 10) 00:35:12.208 4837.181 - 4868.389: 99.6752% ( 10) 00:35:12.208 4868.389 - 4899.596: 99.6870% ( 8) 00:35:12.208 4899.596 - 4930.804: 99.7003% ( 9) 00:35:12.208 4930.804 - 4962.011: 99.7121% ( 8) 00:35:12.208 4962.011 - 4993.219: 99.7239% ( 8) 00:35:12.208 4993.219 - 5024.427: 99.7313% ( 5) 00:35:12.208 5024.427 - 5055.634: 99.7402% ( 6) 00:35:12.208 5055.634 - 5086.842: 99.7475% ( 5) 00:35:12.208 5086.842 - 5118.050: 99.7520% ( 3) 00:35:12.208 5118.050 - 5149.257: 99.7579% ( 4) 00:35:12.208 5149.257 - 5180.465: 99.7653% ( 5) 00:35:12.208 5180.465 - 5211.672: 99.7726% ( 5) 00:35:12.208 5211.672 - 5242.880: 99.7800% ( 5) 00:35:12.208 5242.880 - 5274.088: 99.7859% ( 4) 00:35:12.208 5274.088 - 5305.295: 99.7918% ( 4) 00:35:12.208 5305.295 - 5336.503: 99.7977% ( 4) 00:35:12.208 5336.503 - 5367.710: 99.8051% ( 5) 00:35:12.208 5367.710 
- 5398.918: 99.8125% ( 5) 00:35:12.208 5398.918 - 5430.126: 99.8184% ( 4) 00:35:12.208 5430.126 - 5461.333: 99.8258% ( 5) 00:35:12.208 5461.333 - 5492.541: 99.8332% ( 5) 00:35:12.208 5492.541 - 5523.749: 99.8406% ( 5) 00:35:12.208 5523.749 - 5554.956: 99.8465% ( 4) 00:35:12.208 5554.956 - 5586.164: 99.8524% ( 4) 00:35:12.208 5586.164 - 5617.371: 99.8553% ( 2) 00:35:12.208 5617.371 - 5648.579: 99.8597% ( 3) 00:35:12.208 5648.579 - 5679.787: 99.8642% ( 3) 00:35:12.208 5679.787 - 5710.994: 99.8686% ( 3) 00:35:12.208 5710.994 - 5742.202: 99.8730% ( 3) 00:35:12.208 5742.202 - 5773.410: 99.8775% ( 3) 00:35:12.208 5773.410 - 5804.617: 99.8804% ( 2) 00:35:12.208 5804.617 - 5835.825: 99.8834% ( 2) 00:35:12.208 5835.825 - 5867.032: 99.8878% ( 3) 00:35:12.208 5867.032 - 5898.240: 99.8922% ( 3) 00:35:12.208 5898.240 - 5929.448: 99.8967% ( 3) 00:35:12.208 5929.448 - 5960.655: 99.8996% ( 2) 00:35:12.208 5960.655 - 5991.863: 99.9026% ( 2) 00:35:12.208 5991.863 - 6023.070: 99.9070% ( 3) 00:35:12.208 6023.070 - 6054.278: 99.9099% ( 2) 00:35:12.208 6054.278 - 6085.486: 99.9144% ( 3) 00:35:12.208 6085.486 - 6116.693: 99.9188% ( 3) 00:35:12.208 6116.693 - 6147.901: 99.9232% ( 3) 00:35:12.208 6147.901 - 6179.109: 99.9277% ( 3) 00:35:12.208 6179.109 - 6210.316: 99.9306% ( 2) 00:35:12.208 6210.316 - 6241.524: 99.9321% ( 1) 00:35:12.208 6272.731 - 6303.939: 99.9336% ( 1) 00:35:12.208 6303.939 - 6335.147: 99.9350% ( 1) 00:35:12.208 6335.147 - 6366.354: 99.9365% ( 1) 00:35:12.208 6366.354 - 6397.562: 99.9380% ( 1) 00:35:12.208 6397.562 - 6428.770: 99.9395% ( 1) 00:35:12.208 6428.770 - 6459.977: 99.9409% ( 1) 00:35:12.208 6459.977 - 6491.185: 99.9424% ( 1) 00:35:12.208 6522.392 - 6553.600: 99.9439% ( 1) 00:35:12.208 6553.600 - 6584.808: 99.9454% ( 1) 00:35:12.208 6584.808 - 6616.015: 99.9469% ( 1) 00:35:12.208 6616.015 - 6647.223: 99.9483% ( 1) 00:35:12.208 6647.223 - 6678.430: 99.9498% ( 1) 00:35:12.208 6678.430 - 6709.638: 99.9513% ( 1) 00:35:12.208 6709.638 - 6740.846: 99.9528% ( 1) 00:35:12.208 6772.053 - 6803.261: 99.9542% ( 1) 00:35:12.208 6803.261 - 6834.469: 99.9557% ( 1) 00:35:12.208 6865.676 - 6896.884: 99.9572% ( 1) 00:35:12.208 6896.884 - 6928.091: 99.9587% ( 1) 00:35:12.208 6959.299 - 6990.507: 99.9601% ( 1) 00:35:12.208 6990.507 - 7021.714: 99.9616% ( 1) 00:35:12.208 7021.714 - 7052.922: 99.9631% ( 1) 00:35:12.208 7052.922 - 7084.130: 99.9646% ( 1) 00:35:12.208 7115.337 - 7146.545: 99.9660% ( 1) 00:35:12.208 7146.545 - 7177.752: 99.9675% ( 1) 00:35:12.208 7177.752 - 7208.960: 99.9690% ( 1) 00:35:12.208 7208.960 - 7240.168: 99.9705% ( 1) 00:35:12.208 7271.375 - 7302.583: 99.9719% ( 1) 00:35:12.208 7302.583 - 7333.790: 99.9734% ( 1) 00:35:12.208 7333.790 - 7364.998: 99.9749% ( 1) 00:35:12.208 7364.998 - 7396.206: 99.9764% ( 1) 00:35:12.208 7396.206 - 7427.413: 99.9779% ( 1) 00:35:12.208 7427.413 - 7458.621: 99.9793% ( 1) 00:35:12.208 7458.621 - 7489.829: 99.9808% ( 1) 00:35:12.208 7521.036 - 7552.244: 99.9823% ( 1) 00:35:12.208 7552.244 - 7583.451: 99.9838% ( 1) 00:35:12.208 7583.451 - 7614.659: 99.9852% ( 1) 00:35:12.208 7645.867 - 7677.074: 99.9867% ( 1) 00:35:12.208 7677.074 - 7708.282: 99.9882% ( 1) 00:35:12.209 7708.282 - 7739.490: 99.9897% ( 1) 00:35:12.209 7739.490 - 7770.697: 99.9911% ( 1) 00:35:12.209 7801.905 - 7833.112: 99.9926% ( 1) 00:35:12.209 7833.112 - 7864.320: 99.9941% ( 1) 00:35:12.209 7864.320 - 7895.528: 99.9956% ( 1) 00:35:12.209 7895.528 - 7926.735: 99.9970% ( 1) 00:35:12.209 7926.735 - 7957.943: 99.9985% ( 1) 00:35:12.209 7989.150 - 8051.566: 100.0000% ( 1) 00:35:12.209 
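The latency histogram above comes from the spdk_nvme_perf write run started at 00:49:04. A minimal sketch of that invocation follows; the binary path and flags are copied verbatim from the traced command, while the flag descriptions in the comments are my own reading and should be checked against the tool's --help output.

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
# -q 128    queue depth (my reading)
# -w write  I/O workload pattern
# -o 12288  I/O size in bytes
# -t 1      run time in seconds
# -LL       latency tracking; the doubled flag appears to request the detailed per-range histogram (assumption)
# -i 0      shared-memory instance ID so SPDK processes can coexist (assumption)
"$PERF" -q 128 -w write -o 12288 -t 1 -LL -i 0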
00:35:12.209 ************************************ 00:35:12.209 END TEST nvme_perf 00:35:12.209 ************************************ 00:35:12.209 00:49:05 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:35:12.209 00:35:12.209 real 0m2.779s 00:35:12.209 user 0m2.301s 00:35:12.209 sys 0m0.326s 00:35:12.209 00:49:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:12.209 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:35:12.209 00:49:05 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:35:12.209 00:49:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:35:12.209 00:49:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:12.209 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:35:12.209 ************************************ 00:35:12.209 START TEST nvme_hello_world 00:35:12.209 ************************************ 00:35:12.209 00:49:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:35:12.788 Initializing NVMe Controllers 00:35:12.788 Attached to 0000:00:10.0 00:35:12.788 Namespace ID: 1 size: 5GB 00:35:12.788 Initialization complete. 00:35:12.788 INFO: using host memory buffer for IO 00:35:12.788 Hello world! 00:35:12.788 ************************************ 00:35:12.788 END TEST nvme_hello_world 00:35:12.788 ************************************ 00:35:12.788 00:35:12.788 real 0m0.393s 00:35:12.788 user 0m0.114s 00:35:12.788 sys 0m0.183s 00:35:12.788 00:49:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:12.788 00:49:06 -- common/autotest_common.sh@10 -- # set +x 00:35:12.788 00:49:06 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:35:12.788 00:49:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:12.788 00:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:12.788 00:49:06 -- common/autotest_common.sh@10 -- # set +x 00:35:12.788 ************************************ 00:35:12.788 START TEST nvme_sgl 00:35:12.788 ************************************ 00:35:12.788 00:49:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:35:13.046 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:35:13.046 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:35:13.046 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:35:13.046 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:35:13.046 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:35:13.046 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:35:13.046 NVMe Readv/Writev Request test 00:35:13.046 Attached to 0000:00:10.0 00:35:13.046 0000:00:10.0: build_io_request_2 test passed 00:35:13.046 0000:00:10.0: build_io_request_4 test passed 00:35:13.046 0000:00:10.0: build_io_request_5 test passed 00:35:13.046 0000:00:10.0: build_io_request_6 test passed 00:35:13.046 0000:00:10.0: build_io_request_7 test passed 00:35:13.046 0000:00:10.0: build_io_request_10 test passed 00:35:13.046 Cleaning up... 
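Each test in this stretch of the log (nvme_hello_world, nvme_sgl, and the steps that follow) is launched through the run_test helper from autotest_common.sh. The sketch below is an illustrative stand-in for that pattern, not the real helper, whose actual implementation also handles the xtrace and timing bookkeeping visible in the surrounding trace lines; the binary path and flag in the example are copied from the nvme_hello_world step above.

run_test_sketch() {
    # Print START/END banners around a test command, time it, and
    # propagate its exit status: a simplified model of run_test.
    local name=$1; shift
    echo "START TEST $name"
    local start=$SECONDS rc=0
    "$@" || rc=$?
    echo "END TEST $name (rc=$rc, took $((SECONDS - start))s)"
    return $rc
}

# Same binary and flag as the nvme_hello_world step above:
run_test_sketch nvme_hello_world \
    /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0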
00:35:13.305 ************************************ 00:35:13.305 END TEST nvme_sgl 00:35:13.305 ************************************ 00:35:13.305 00:35:13.305 real 0m0.447s 00:35:13.305 user 0m0.214s 00:35:13.305 sys 0m0.139s 00:35:13.305 00:49:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:13.305 00:49:06 -- common/autotest_common.sh@10 -- # set +x 00:35:13.305 00:49:06 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:35:13.305 00:49:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:13.305 00:49:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:13.305 00:49:06 -- common/autotest_common.sh@10 -- # set +x 00:35:13.305 ************************************ 00:35:13.305 START TEST nvme_e2edp 00:35:13.305 ************************************ 00:35:13.305 00:49:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:35:13.563 NVMe Write/Read with End-to-End data protection test 00:35:13.563 Attached to 0000:00:10.0 00:35:13.563 Cleaning up... 00:35:13.563 ************************************ 00:35:13.563 END TEST nvme_e2edp 00:35:13.563 ************************************ 00:35:13.563 00:35:13.563 real 0m0.406s 00:35:13.563 user 0m0.101s 00:35:13.563 sys 0m0.225s 00:35:13.563 00:49:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:13.563 00:49:07 -- common/autotest_common.sh@10 -- # set +x 00:35:13.821 00:49:07 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:35:13.821 00:49:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:13.821 00:49:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:13.821 00:49:07 -- common/autotest_common.sh@10 -- # set +x 00:35:13.821 ************************************ 00:35:13.821 START TEST nvme_reserve 00:35:13.821 ************************************ 00:35:13.821 00:49:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:35:14.080 ===================================================== 00:35:14.080 NVMe Controller at PCI bus 0, device 16, function 0 00:35:14.080 ===================================================== 00:35:14.080 Reservations: Not Supported 00:35:14.080 Reservation test passed 00:35:14.080 00:35:14.080 real 0m0.381s 00:35:14.080 user 0m0.144s 00:35:14.080 sys 0m0.149s 00:35:14.080 00:49:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:14.080 00:49:07 -- common/autotest_common.sh@10 -- # set +x 00:35:14.080 ************************************ 00:35:14.080 END TEST nvme_reserve 00:35:14.080 ************************************ 00:35:14.080 00:49:07 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:35:14.080 00:49:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:14.080 00:49:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:14.080 00:49:07 -- common/autotest_common.sh@10 -- # set +x 00:35:14.361 ************************************ 00:35:14.361 START TEST nvme_err_injection 00:35:14.361 ************************************ 00:35:14.361 00:49:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:35:14.620 NVMe Error Injection test 00:35:14.620 Attached to 0000:00:10.0 00:35:14.620 0000:00:10.0: get features failed as expected 00:35:14.620 0000:00:10.0: get features successfully as expected 00:35:14.620 0000:00:10.0: 
read failed as expected 00:35:14.620 0000:00:10.0: read successfully as expected 00:35:14.620 Cleaning up... 00:35:14.620 ************************************ 00:35:14.620 END TEST nvme_err_injection 00:35:14.620 ************************************ 00:35:14.620 00:35:14.620 real 0m0.367s 00:35:14.620 user 0m0.120s 00:35:14.620 sys 0m0.184s 00:35:14.620 00:49:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:14.620 00:49:08 -- common/autotest_common.sh@10 -- # set +x 00:35:14.620 00:49:08 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:35:14.620 00:49:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:35:14.620 00:49:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:14.620 00:49:08 -- common/autotest_common.sh@10 -- # set +x 00:35:14.620 ************************************ 00:35:14.620 START TEST nvme_overhead 00:35:14.620 ************************************ 00:35:14.620 00:49:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:35:15.998 Initializing NVMe Controllers 00:35:15.998 Attached to 0000:00:10.0 00:35:15.998 Initialization complete. Launching workers. 00:35:15.998 submit (in ns) avg, min, max = 14782.5, 11288.6, 128101.0 00:35:15.998 complete (in ns) avg, min, max = 9500.1, 7818.1, 186462.9 00:35:15.998 00:35:15.998 Submit histogram 00:35:15.998 ================ 00:35:15.998 Range in us Cumulative Count 00:35:15.998 11.276 - 11.337: 0.0083% ( 1) 00:35:15.998 12.312 - 12.373: 0.0250% ( 2) 00:35:15.998 12.373 - 12.434: 0.0333% ( 1) 00:35:15.998 12.434 - 12.495: 0.0832% ( 6) 00:35:15.998 12.495 - 12.556: 0.2081% ( 15) 00:35:15.998 12.556 - 12.617: 0.3246% ( 14) 00:35:15.998 12.617 - 12.678: 0.4412% ( 14) 00:35:15.998 12.678 - 12.739: 0.5827% ( 17) 00:35:15.998 12.739 - 12.800: 0.6243% ( 5) 00:35:15.998 12.800 - 12.861: 0.6909% ( 8) 00:35:15.998 12.861 - 12.922: 0.7574% ( 8) 00:35:15.998 12.922 - 12.983: 0.7907% ( 4) 00:35:15.998 12.983 - 13.044: 0.8074% ( 2) 00:35:15.998 13.044 - 13.105: 0.8157% ( 1) 00:35:15.998 13.105 - 13.166: 0.8240% ( 1) 00:35:15.998 13.166 - 13.227: 0.8407% ( 2) 00:35:15.998 13.227 - 13.288: 0.8740% ( 4) 00:35:15.998 13.288 - 13.349: 1.0571% ( 22) 00:35:15.998 13.349 - 13.410: 1.6730% ( 74) 00:35:15.998 13.410 - 13.470: 3.0215% ( 162) 00:35:15.998 13.470 - 13.531: 5.4520% ( 292) 00:35:15.998 13.531 - 13.592: 8.3985% ( 354) 00:35:15.998 13.592 - 13.653: 11.9777% ( 430) 00:35:15.998 13.653 - 13.714: 14.8327% ( 343) 00:35:15.998 13.714 - 13.775: 17.1134% ( 274) 00:35:15.998 13.775 - 13.836: 18.3702% ( 151) 00:35:15.998 13.836 - 13.897: 19.2858% ( 110) 00:35:15.998 13.897 - 13.958: 19.9850% ( 84) 00:35:15.998 13.958 - 14.019: 20.4345% ( 54) 00:35:15.998 14.019 - 14.080: 20.9006% ( 56) 00:35:15.998 14.080 - 14.141: 21.8495% ( 114) 00:35:15.998 14.141 - 14.202: 24.1219% ( 273) 00:35:15.998 14.202 - 14.263: 28.3253% ( 505) 00:35:15.998 14.263 - 14.324: 35.2755% ( 835) 00:35:15.998 14.324 - 14.385: 44.0403% ( 1053) 00:35:15.998 14.385 - 14.446: 52.6969% ( 1040) 00:35:15.998 14.446 - 14.507: 59.8801% ( 863) 00:35:15.998 14.507 - 14.568: 65.2822% ( 649) 00:35:15.998 14.568 - 14.629: 69.5355% ( 511) 00:35:15.998 14.629 - 14.690: 73.0814% ( 426) 00:35:15.998 14.690 - 14.750: 76.2777% ( 384) 00:35:15.998 14.750 - 14.811: 79.5905% ( 398) 00:35:15.998 14.811 - 14.872: 82.2124% ( 315) 00:35:15.998 14.872 - 14.933: 84.4515% ( 269) 00:35:15.998 14.933 - 14.994: 85.9414% ( 179) 00:35:15.998 
14.994 - 15.055: 87.1150% ( 141) 00:35:15.998 15.055 - 15.116: 88.1472% ( 124) 00:35:15.998 15.116 - 15.177: 88.9878% ( 101) 00:35:15.998 15.177 - 15.238: 89.7786% ( 95) 00:35:15.998 15.238 - 15.299: 90.3196% ( 65) 00:35:15.998 15.299 - 15.360: 90.9273% ( 73) 00:35:15.998 15.360 - 15.421: 91.3684% ( 53) 00:35:15.998 15.421 - 15.482: 91.8096% ( 53) 00:35:15.998 15.482 - 15.543: 92.3173% ( 61) 00:35:15.998 15.543 - 15.604: 92.6253% ( 37) 00:35:15.998 15.604 - 15.726: 93.0581% ( 52) 00:35:15.998 15.726 - 15.848: 93.5242% ( 56) 00:35:15.998 15.848 - 15.970: 93.8155% ( 35) 00:35:15.998 15.970 - 16.091: 94.1235% ( 37) 00:35:15.998 16.091 - 16.213: 94.1984% ( 9) 00:35:15.998 16.213 - 16.335: 94.2733% ( 9) 00:35:15.998 16.335 - 16.457: 94.3399% ( 8) 00:35:15.998 16.457 - 16.579: 94.4232% ( 10) 00:35:15.998 16.579 - 16.701: 94.4565% ( 4) 00:35:15.998 16.701 - 16.823: 94.5397% ( 10) 00:35:15.998 16.823 - 16.945: 94.6146% ( 9) 00:35:15.998 16.945 - 17.067: 94.8976% ( 34) 00:35:15.998 17.067 - 17.189: 95.1640% ( 32) 00:35:15.998 17.189 - 17.310: 95.4387% ( 33) 00:35:15.998 17.310 - 17.432: 95.5718% ( 16) 00:35:15.998 17.432 - 17.554: 95.6551% ( 10) 00:35:15.998 17.554 - 17.676: 95.7217% ( 8) 00:35:15.998 17.676 - 17.798: 95.8299% ( 13) 00:35:15.998 17.798 - 17.920: 95.9630% ( 16) 00:35:15.998 17.920 - 18.042: 96.0962% ( 16) 00:35:15.998 18.042 - 18.164: 96.2544% ( 19) 00:35:15.998 18.164 - 18.286: 96.3626% ( 13) 00:35:15.998 18.286 - 18.408: 96.5041% ( 17) 00:35:15.998 18.408 - 18.530: 96.5540% ( 6) 00:35:15.998 18.530 - 18.651: 96.6706% ( 14) 00:35:15.998 18.651 - 18.773: 96.7122% ( 5) 00:35:15.998 18.773 - 18.895: 96.7704% ( 7) 00:35:15.998 18.895 - 19.017: 96.7954% ( 3) 00:35:15.998 19.017 - 19.139: 96.8703% ( 9) 00:35:15.998 19.139 - 19.261: 96.9203% ( 6) 00:35:15.999 19.261 - 19.383: 96.9868% ( 8) 00:35:15.999 19.383 - 19.505: 97.0618% ( 9) 00:35:15.999 19.505 - 19.627: 97.0784% ( 2) 00:35:15.999 19.627 - 19.749: 97.0951% ( 2) 00:35:15.999 19.749 - 19.870: 97.1450% ( 6) 00:35:15.999 19.870 - 19.992: 97.2116% ( 8) 00:35:15.999 19.992 - 20.114: 97.3198% ( 13) 00:35:15.999 20.114 - 20.236: 97.3864% ( 8) 00:35:15.999 20.236 - 20.358: 97.4446% ( 7) 00:35:15.999 20.358 - 20.480: 97.5279% ( 10) 00:35:15.999 20.480 - 20.602: 97.5695% ( 5) 00:35:15.999 20.602 - 20.724: 97.6278% ( 7) 00:35:15.999 20.724 - 20.846: 97.6860% ( 7) 00:35:15.999 20.846 - 20.968: 97.7443% ( 7) 00:35:15.999 20.968 - 21.090: 97.7609% ( 2) 00:35:15.999 21.090 - 21.211: 97.8275% ( 8) 00:35:15.999 21.211 - 21.333: 97.9274% ( 12) 00:35:15.999 21.333 - 21.455: 97.9690% ( 5) 00:35:15.999 21.455 - 21.577: 98.0523% ( 10) 00:35:15.999 21.577 - 21.699: 98.1189% ( 8) 00:35:15.999 21.699 - 21.821: 98.1522% ( 4) 00:35:15.999 21.943 - 22.065: 98.2271% ( 9) 00:35:15.999 22.065 - 22.187: 98.2937% ( 8) 00:35:15.999 22.187 - 22.309: 98.3020% ( 1) 00:35:15.999 22.309 - 22.430: 98.3270% ( 3) 00:35:15.999 22.430 - 22.552: 98.3353% ( 1) 00:35:15.999 22.552 - 22.674: 98.3519% ( 2) 00:35:15.999 22.674 - 22.796: 98.3769% ( 3) 00:35:15.999 22.796 - 22.918: 98.3852% ( 1) 00:35:15.999 22.918 - 23.040: 98.4019% ( 2) 00:35:15.999 23.162 - 23.284: 98.4185% ( 2) 00:35:15.999 23.284 - 23.406: 98.4268% ( 1) 00:35:15.999 23.406 - 23.528: 98.4352% ( 1) 00:35:15.999 23.528 - 23.650: 98.4435% ( 1) 00:35:15.999 23.650 - 23.771: 98.4601% ( 2) 00:35:15.999 23.771 - 23.893: 98.4685% ( 1) 00:35:15.999 23.893 - 24.015: 98.4768% ( 1) 00:35:15.999 24.015 - 24.137: 98.4934% ( 2) 00:35:15.999 24.137 - 24.259: 98.5017% ( 1) 00:35:15.999 24.259 - 24.381: 98.5267% ( 3) 00:35:15.999 
24.381 - 24.503: 98.5434% ( 2) 00:35:15.999 24.503 - 24.625: 98.5850% ( 5) 00:35:15.999 24.625 - 24.747: 98.6266% ( 5) 00:35:15.999 24.747 - 24.869: 98.6599% ( 4) 00:35:15.999 24.869 - 24.990: 98.7265% ( 8) 00:35:15.999 24.990 - 25.112: 98.8264% ( 12) 00:35:15.999 25.112 - 25.234: 98.8930% ( 8) 00:35:15.999 25.234 - 25.356: 99.0012% ( 13) 00:35:15.999 25.356 - 25.478: 99.0761% ( 9) 00:35:15.999 25.478 - 25.600: 99.1343% ( 7) 00:35:15.999 25.600 - 25.722: 99.2342% ( 12) 00:35:15.999 25.722 - 25.844: 99.3091% ( 9) 00:35:15.999 25.844 - 25.966: 99.3341% ( 3) 00:35:15.999 25.966 - 26.088: 99.3757% ( 5) 00:35:15.999 26.088 - 26.210: 99.3841% ( 1) 00:35:15.999 26.210 - 26.331: 99.4090% ( 3) 00:35:15.999 26.331 - 26.453: 99.4173% ( 1) 00:35:15.999 26.453 - 26.575: 99.4257% ( 1) 00:35:15.999 26.697 - 26.819: 99.4340% ( 1) 00:35:15.999 26.819 - 26.941: 99.4423% ( 1) 00:35:15.999 26.941 - 27.063: 99.4506% ( 1) 00:35:15.999 27.063 - 27.185: 99.4673% ( 2) 00:35:15.999 27.185 - 27.307: 99.4756% ( 1) 00:35:15.999 27.429 - 27.550: 99.4839% ( 1) 00:35:15.999 27.672 - 27.794: 99.4923% ( 1) 00:35:15.999 28.404 - 28.526: 99.5006% ( 1) 00:35:15.999 28.770 - 28.891: 99.5089% ( 1) 00:35:15.999 29.013 - 29.135: 99.5172% ( 1) 00:35:15.999 29.257 - 29.379: 99.5256% ( 1) 00:35:15.999 29.379 - 29.501: 99.5339% ( 1) 00:35:15.999 29.745 - 29.867: 99.5422% ( 1) 00:35:15.999 29.867 - 29.989: 99.5505% ( 1) 00:35:15.999 29.989 - 30.110: 99.5755% ( 3) 00:35:15.999 30.110 - 30.232: 99.5921% ( 2) 00:35:15.999 30.232 - 30.354: 99.6254% ( 4) 00:35:15.999 30.354 - 30.476: 99.6421% ( 2) 00:35:15.999 30.476 - 30.598: 99.6837% ( 5) 00:35:15.999 30.598 - 30.720: 99.7087% ( 3) 00:35:15.999 30.720 - 30.842: 99.7253% ( 2) 00:35:15.999 30.964 - 31.086: 99.7503% ( 3) 00:35:15.999 31.086 - 31.208: 99.7836% ( 4) 00:35:15.999 31.208 - 31.451: 99.8169% ( 4) 00:35:15.999 31.451 - 31.695: 99.8335% ( 2) 00:35:15.999 31.939 - 32.183: 99.8419% ( 1) 00:35:15.999 32.183 - 32.427: 99.8585% ( 2) 00:35:15.999 32.427 - 32.670: 99.8668% ( 1) 00:35:15.999 32.670 - 32.914: 99.8751% ( 1) 00:35:15.999 32.914 - 33.158: 99.8835% ( 1) 00:35:15.999 35.352 - 35.596: 99.8918% ( 1) 00:35:15.999 35.840 - 36.084: 99.9001% ( 1) 00:35:15.999 37.303 - 37.547: 99.9084% ( 1) 00:35:15.999 38.034 - 38.278: 99.9168% ( 1) 00:35:15.999 38.766 - 39.010: 99.9251% ( 1) 00:35:15.999 41.691 - 41.935: 99.9334% ( 1) 00:35:15.999 41.935 - 42.179: 99.9417% ( 1) 00:35:15.999 42.179 - 42.423: 99.9501% ( 1) 00:35:15.999 46.324 - 46.568: 99.9584% ( 1) 00:35:15.999 47.543 - 47.787: 99.9667% ( 1) 00:35:15.999 48.518 - 48.762: 99.9750% ( 1) 00:35:15.999 50.469 - 50.712: 99.9834% ( 1) 00:35:15.999 105.326 - 105.813: 99.9917% ( 1) 00:35:15.999 127.756 - 128.731: 100.0000% ( 1) 00:35:15.999 00:35:15.999 Complete histogram 00:35:15.999 ================== 00:35:15.999 Range in us Cumulative Count 00:35:15.999 7.802 - 7.863: 0.1415% ( 17) 00:35:15.999 7.863 - 7.924: 0.5161% ( 45) 00:35:15.999 7.924 - 7.985: 0.7242% ( 25) 00:35:15.999 7.985 - 8.046: 0.8990% ( 21) 00:35:15.999 8.046 - 8.107: 0.9572% ( 7) 00:35:15.999 8.107 - 8.168: 1.0405% ( 10) 00:35:15.999 8.168 - 8.229: 1.1070% ( 8) 00:35:15.999 8.229 - 8.290: 1.1237% ( 2) 00:35:15.999 8.472 - 8.533: 1.9644% ( 101) 00:35:15.999 8.533 - 8.594: 7.4746% ( 662) 00:35:15.999 8.594 - 8.655: 14.1751% ( 805) 00:35:15.999 8.655 - 8.716: 18.1705% ( 480) 00:35:15.999 8.716 - 8.777: 19.8518% ( 202) 00:35:15.999 8.777 - 8.838: 20.7425% ( 107) 00:35:15.999 8.838 - 8.899: 21.5249% ( 94) 00:35:15.999 8.899 - 8.960: 22.2157% ( 83) 00:35:15.999 8.960 - 9.021: 
22.6236% ( 49) 00:35:15.999 9.021 - 9.082: 23.8888% ( 152) 00:35:15.999 9.082 - 9.143: 34.9259% ( 1326) 00:35:15.999 9.143 - 9.204: 53.7706% ( 2264) 00:35:15.999 9.204 - 9.265: 65.6817% ( 1431) 00:35:15.999 9.265 - 9.326: 71.4000% ( 687) 00:35:15.999 9.326 - 9.387: 74.2217% ( 339) 00:35:15.999 9.387 - 9.448: 77.9008% ( 442) 00:35:15.999 9.448 - 9.509: 83.4943% ( 672) 00:35:15.999 9.509 - 9.570: 87.6561% ( 500) 00:35:15.999 9.570 - 9.630: 89.7952% ( 257) 00:35:15.999 9.630 - 9.691: 90.7857% ( 119) 00:35:15.999 9.691 - 9.752: 91.4600% ( 81) 00:35:15.999 9.752 - 9.813: 92.1425% ( 82) 00:35:15.999 9.813 - 9.874: 92.7418% ( 72) 00:35:15.999 9.874 - 9.935: 93.1580% ( 50) 00:35:15.999 9.935 - 9.996: 93.5991% ( 53) 00:35:15.999 9.996 - 10.057: 93.8155% ( 26) 00:35:15.999 10.057 - 10.118: 93.9903% ( 21) 00:35:15.999 10.118 - 10.179: 94.1069% ( 14) 00:35:15.999 10.179 - 10.240: 94.2068% ( 12) 00:35:15.999 10.240 - 10.301: 94.2817% ( 9) 00:35:15.999 10.301 - 10.362: 94.3649% ( 10) 00:35:15.999 10.362 - 10.423: 94.4065% ( 5) 00:35:15.999 10.423 - 10.484: 94.5064% ( 12) 00:35:15.999 10.484 - 10.545: 94.5980% ( 11) 00:35:15.999 10.545 - 10.606: 94.6396% ( 5) 00:35:15.999 10.606 - 10.667: 94.6979% ( 7) 00:35:15.999 10.667 - 10.728: 94.7894% ( 11) 00:35:15.999 10.728 - 10.789: 94.8310% ( 5) 00:35:15.999 10.789 - 10.850: 94.9725% ( 17) 00:35:16.000 10.850 - 10.910: 95.0974% ( 15) 00:35:16.000 10.910 - 10.971: 95.4470% ( 42) 00:35:16.000 10.971 - 11.032: 95.7383% ( 35) 00:35:16.000 11.032 - 11.093: 95.9048% ( 20) 00:35:16.000 11.093 - 11.154: 96.0796% ( 21) 00:35:16.000 11.154 - 11.215: 96.1545% ( 9) 00:35:16.000 11.215 - 11.276: 96.2044% ( 6) 00:35:16.000 11.276 - 11.337: 96.2627% ( 7) 00:35:16.000 11.337 - 11.398: 96.3459% ( 10) 00:35:16.000 11.398 - 11.459: 96.4208% ( 9) 00:35:16.000 11.459 - 11.520: 96.4625% ( 5) 00:35:16.000 11.520 - 11.581: 96.4958% ( 4) 00:35:16.000 11.581 - 11.642: 96.5290% ( 4) 00:35:16.000 11.642 - 11.703: 96.5707% ( 5) 00:35:16.000 11.703 - 11.764: 96.6123% ( 5) 00:35:16.000 11.764 - 11.825: 96.6456% ( 4) 00:35:16.000 11.825 - 11.886: 96.7038% ( 7) 00:35:16.000 11.886 - 11.947: 96.7371% ( 4) 00:35:16.000 11.947 - 12.008: 96.7621% ( 3) 00:35:16.000 12.008 - 12.069: 96.7788% ( 2) 00:35:16.000 12.069 - 12.130: 96.8121% ( 4) 00:35:16.000 12.130 - 12.190: 96.8287% ( 2) 00:35:16.000 12.190 - 12.251: 96.8703% ( 5) 00:35:16.000 12.251 - 12.312: 96.9036% ( 4) 00:35:16.000 12.312 - 12.373: 96.9536% ( 6) 00:35:16.000 12.373 - 12.434: 96.9702% ( 2) 00:35:16.000 12.434 - 12.495: 97.0118% ( 5) 00:35:16.000 12.495 - 12.556: 97.0285% ( 2) 00:35:16.000 12.556 - 12.617: 97.0534% ( 3) 00:35:16.000 12.617 - 12.678: 97.0951% ( 5) 00:35:16.000 12.678 - 12.739: 97.1200% ( 3) 00:35:16.000 12.739 - 12.800: 97.1367% ( 2) 00:35:16.000 12.800 - 12.861: 97.1616% ( 3) 00:35:16.000 12.861 - 12.922: 97.1783% ( 2) 00:35:16.000 12.922 - 12.983: 97.2116% ( 4) 00:35:16.000 12.983 - 13.044: 97.2366% ( 3) 00:35:16.000 13.044 - 13.105: 97.2782% ( 5) 00:35:16.000 13.105 - 13.166: 97.3031% ( 3) 00:35:16.000 13.166 - 13.227: 97.3531% ( 6) 00:35:16.000 13.227 - 13.288: 97.3697% ( 2) 00:35:16.000 13.288 - 13.349: 97.3947% ( 3) 00:35:16.000 13.349 - 13.410: 97.4030% ( 1) 00:35:16.000 13.410 - 13.470: 97.4197% ( 2) 00:35:16.000 13.470 - 13.531: 97.4613% ( 5) 00:35:16.000 13.531 - 13.592: 97.4863% ( 3) 00:35:16.000 13.592 - 13.653: 97.5196% ( 4) 00:35:16.000 13.653 - 13.714: 97.5362% ( 2) 00:35:16.000 13.714 - 13.775: 97.5612% ( 3) 00:35:16.000 13.775 - 13.836: 97.5861% ( 3) 00:35:16.000 13.836 - 13.897: 97.6111% ( 3) 
00:35:16.000 13.897 - 13.958: 97.6278% ( 2) 00:35:16.000 13.958 - 14.019: 97.6444% ( 2) 00:35:16.000 14.019 - 14.080: 97.6944% ( 6) 00:35:16.000 14.080 - 14.141: 97.7110% ( 2) 00:35:16.000 14.141 - 14.202: 97.7360% ( 3) 00:35:16.000 14.202 - 14.263: 97.7693% ( 4) 00:35:16.000 14.263 - 14.324: 97.7942% ( 3) 00:35:16.000 14.324 - 14.385: 97.8442% ( 6) 00:35:16.000 14.385 - 14.446: 97.9024% ( 7) 00:35:16.000 14.446 - 14.507: 97.9524% ( 6) 00:35:16.000 14.507 - 14.568: 97.9607% ( 1) 00:35:16.000 14.568 - 14.629: 97.9857% ( 3) 00:35:16.000 14.629 - 14.690: 97.9940% ( 1) 00:35:16.000 14.750 - 14.811: 98.0023% ( 1) 00:35:16.000 14.811 - 14.872: 98.0107% ( 1) 00:35:16.000 14.872 - 14.933: 98.0439% ( 4) 00:35:16.000 14.933 - 14.994: 98.0523% ( 1) 00:35:16.000 14.994 - 15.055: 98.0606% ( 1) 00:35:16.000 15.055 - 15.116: 98.0689% ( 1) 00:35:16.000 15.116 - 15.177: 98.0772% ( 1) 00:35:16.000 15.177 - 15.238: 98.1022% ( 3) 00:35:16.000 15.238 - 15.299: 98.1189% ( 2) 00:35:16.000 15.299 - 15.360: 98.1272% ( 1) 00:35:16.000 15.360 - 15.421: 98.1438% ( 2) 00:35:16.000 15.421 - 15.482: 98.1771% ( 4) 00:35:16.000 15.482 - 15.543: 98.1855% ( 1) 00:35:16.000 15.543 - 15.604: 98.1938% ( 1) 00:35:16.000 15.604 - 15.726: 98.2021% ( 1) 00:35:16.000 15.726 - 15.848: 98.2354% ( 4) 00:35:16.000 15.848 - 15.970: 98.2437% ( 1) 00:35:16.000 15.970 - 16.091: 98.2520% ( 1) 00:35:16.000 16.091 - 16.213: 98.2687% ( 2) 00:35:16.000 16.213 - 16.335: 98.2853% ( 2) 00:35:16.000 16.335 - 16.457: 98.3103% ( 3) 00:35:16.000 16.457 - 16.579: 98.3270% ( 2) 00:35:16.000 16.579 - 16.701: 98.3686% ( 5) 00:35:16.000 16.823 - 16.945: 98.3769% ( 1) 00:35:16.000 16.945 - 17.067: 98.4102% ( 4) 00:35:16.000 17.067 - 17.189: 98.4185% ( 1) 00:35:16.000 17.189 - 17.310: 98.4268% ( 1) 00:35:16.000 17.310 - 17.432: 98.4435% ( 2) 00:35:16.000 17.432 - 17.554: 98.4601% ( 2) 00:35:16.000 17.554 - 17.676: 98.4768% ( 2) 00:35:16.000 18.042 - 18.164: 98.4934% ( 2) 00:35:16.000 18.164 - 18.286: 98.5101% ( 2) 00:35:16.000 18.286 - 18.408: 98.5184% ( 1) 00:35:16.000 18.408 - 18.530: 98.5267% ( 1) 00:35:16.000 18.651 - 18.773: 98.5350% ( 1) 00:35:16.000 18.773 - 18.895: 98.5434% ( 1) 00:35:16.000 19.383 - 19.505: 98.5517% ( 1) 00:35:16.000 19.505 - 19.627: 98.5850% ( 4) 00:35:16.000 19.627 - 19.749: 98.6349% ( 6) 00:35:16.000 19.749 - 19.870: 98.7598% ( 15) 00:35:16.000 19.870 - 19.992: 98.8430% ( 10) 00:35:16.000 19.992 - 20.114: 98.9179% ( 9) 00:35:16.000 20.114 - 20.236: 98.9845% ( 8) 00:35:16.000 20.236 - 20.358: 99.1094% ( 15) 00:35:16.000 20.358 - 20.480: 99.2758% ( 20) 00:35:16.000 20.480 - 20.602: 99.3591% ( 10) 00:35:16.000 20.602 - 20.724: 99.4007% ( 5) 00:35:16.000 20.724 - 20.846: 99.4257% ( 3) 00:35:16.000 20.846 - 20.968: 99.4506% ( 3) 00:35:16.000 20.968 - 21.090: 99.4673% ( 2) 00:35:16.000 21.090 - 21.211: 99.4756% ( 1) 00:35:16.000 21.333 - 21.455: 99.4839% ( 1) 00:35:16.000 21.821 - 21.943: 99.4923% ( 1) 00:35:16.000 22.065 - 22.187: 99.5006% ( 1) 00:35:16.000 22.430 - 22.552: 99.5172% ( 2) 00:35:16.000 23.162 - 23.284: 99.5256% ( 1) 00:35:16.000 23.284 - 23.406: 99.5339% ( 1) 00:35:16.000 23.893 - 24.015: 99.5422% ( 1) 00:35:16.000 24.015 - 24.137: 99.5672% ( 3) 00:35:16.000 24.625 - 24.747: 99.5838% ( 2) 00:35:16.000 24.747 - 24.869: 99.6088% ( 3) 00:35:16.000 24.869 - 24.990: 99.6338% ( 3) 00:35:16.000 24.990 - 25.112: 99.6587% ( 3) 00:35:16.000 25.112 - 25.234: 99.7087% ( 6) 00:35:16.000 25.234 - 25.356: 99.7253% ( 2) 00:35:16.000 25.356 - 25.478: 99.7669% ( 5) 00:35:16.000 25.478 - 25.600: 99.7919% ( 3) 00:35:16.000 25.600 - 25.722: 
99.8169% ( 3) 00:35:16.000 25.722 - 25.844: 99.8419% ( 3) 00:35:16.000 25.844 - 25.966: 99.8502% ( 1) 00:35:16.000 26.088 - 26.210: 99.8585% ( 1) 00:35:16.000 26.210 - 26.331: 99.8668% ( 1) 00:35:16.000 26.331 - 26.453: 99.8751% ( 1) 00:35:16.000 27.550 - 27.672: 99.8835% ( 1) 00:35:16.000 29.379 - 29.501: 99.8918% ( 1) 00:35:16.000 29.745 - 29.867: 99.9001% ( 1) 00:35:16.001 30.598 - 30.720: 99.9084% ( 1) 00:35:16.001 33.646 - 33.890: 99.9168% ( 1) 00:35:16.001 34.377 - 34.621: 99.9251% ( 1) 00:35:16.001 38.766 - 39.010: 99.9334% ( 1) 00:35:16.001 41.691 - 41.935: 99.9417% ( 1) 00:35:16.001 45.836 - 46.080: 99.9501% ( 1) 00:35:16.001 55.101 - 55.345: 99.9584% ( 1) 00:35:16.001 55.345 - 55.589: 99.9667% ( 1) 00:35:16.001 66.316 - 66.804: 99.9750% ( 1) 00:35:16.001 76.556 - 77.044: 99.9834% ( 1) 00:35:16.001 84.358 - 84.846: 99.9917% ( 1) 00:35:16.001 186.270 - 187.246: 100.0000% ( 1) 00:35:16.001 00:35:16.001 ************************************ 00:35:16.001 END TEST nvme_overhead 00:35:16.001 ************************************ 00:35:16.001 00:35:16.001 real 0m1.354s 00:35:16.001 user 0m1.124s 00:35:16.001 sys 0m0.148s 00:35:16.001 00:49:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:16.001 00:49:09 -- common/autotest_common.sh@10 -- # set +x 00:35:16.001 00:49:09 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:35:16.001 00:49:09 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:35:16.001 00:49:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:16.001 00:49:09 -- common/autotest_common.sh@10 -- # set +x 00:35:16.288 ************************************ 00:35:16.288 START TEST nvme_arbitration 00:35:16.288 ************************************ 00:35:16.288 00:49:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:35:19.570 Initializing NVMe Controllers 00:35:19.570 Attached to 0000:00:10.0 00:35:19.570 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:35:19.570 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:35:19.570 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:35:19.570 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:35:19.570 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:35:19.570 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:35:19.570 Initialization complete. Launching workers. 
00:35:19.570 Starting thread on core 1 with urgent priority queue 00:35:19.570 Starting thread on core 2 with urgent priority queue 00:35:19.570 Starting thread on core 3 with urgent priority queue 00:35:19.570 Starting thread on core 0 with urgent priority queue 00:35:19.570 QEMU NVMe Ctrl (12340 ) core 0: 832.00 IO/s 120.19 secs/100000 ios 00:35:19.570 QEMU NVMe Ctrl (12340 ) core 1: 832.00 IO/s 120.19 secs/100000 ios 00:35:19.570 QEMU NVMe Ctrl (12340 ) core 2: 448.00 IO/s 223.21 secs/100000 ios 00:35:19.570 QEMU NVMe Ctrl (12340 ) core 3: 362.67 IO/s 275.74 secs/100000 ios 00:35:19.570 ======================================================== 00:35:19.570 00:35:19.570 ************************************ 00:35:19.570 END TEST nvme_arbitration 00:35:19.570 ************************************ 00:35:19.570 00:35:19.570 real 0m3.538s 00:35:19.570 user 0m9.549s 00:35:19.570 sys 0m0.152s 00:35:19.570 00:49:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:19.570 00:49:13 -- common/autotest_common.sh@10 -- # set +x 00:35:19.828 00:49:13 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:35:19.828 00:49:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:35:19.828 00:49:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:19.828 00:49:13 -- common/autotest_common.sh@10 -- # set +x 00:35:19.828 ************************************ 00:35:19.828 START TEST nvme_single_aen 00:35:19.828 ************************************ 00:35:19.828 00:49:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:35:20.086 Asynchronous Event Request test 00:35:20.086 Attached to 0000:00:10.0 00:35:20.086 Reset controller to setup AER completions for this process 00:35:20.086 Registering asynchronous event callbacks... 00:35:20.086 Getting orig temperature thresholds of all controllers 00:35:20.086 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:35:20.086 Setting all controllers temperature threshold low to trigger AER 00:35:20.086 Waiting for all controllers temperature threshold to be set lower 00:35:20.086 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:35:20.086 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:35:20.086 Waiting for all controllers to trigger AER and reset threshold 00:35:20.086 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:35:20.086 Cleaning up... 
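The nvme_single_aen step above and the nvme_multi_aen step later in this log drive the same aer test binary, once in a single process and once with -m for the multi-process (parent/child) variant. The commands below are copied from the traced invocations; only the variable name is mine, and the flags are passed through uninterpreted.

AER=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
"$AER" -T -i 0       # single-process run, as in TEST nvme_single_aen
"$AER" -m -T -i 0    # multi-process run, as in TEST nvme_multi_aen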
00:35:20.086 ************************************ 00:35:20.086 END TEST nvme_single_aen 00:35:20.086 ************************************ 00:35:20.086 00:35:20.086 real 0m0.323s 00:35:20.086 user 0m0.102s 00:35:20.086 sys 0m0.143s 00:35:20.086 00:49:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:20.086 00:49:13 -- common/autotest_common.sh@10 -- # set +x 00:35:20.086 00:49:13 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:35:20.086 00:49:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:20.087 00:49:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:20.087 00:49:13 -- common/autotest_common.sh@10 -- # set +x 00:35:20.087 ************************************ 00:35:20.087 START TEST nvme_doorbell_aers 00:35:20.087 ************************************ 00:35:20.087 00:49:13 -- common/autotest_common.sh@1111 -- # nvme_doorbell_aers 00:35:20.087 00:49:13 -- nvme/nvme.sh@70 -- # bdfs=() 00:35:20.087 00:49:13 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:35:20.087 00:49:13 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:35:20.087 00:49:13 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:35:20.087 00:49:13 -- common/autotest_common.sh@1499 -- # bdfs=() 00:35:20.087 00:49:13 -- common/autotest_common.sh@1499 -- # local bdfs 00:35:20.087 00:49:13 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:20.087 00:49:13 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:35:20.087 00:49:13 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:20.345 00:49:13 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:35:20.345 00:49:13 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:35:20.345 00:49:13 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:35:20.345 00:49:13 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:35:20.639 [2024-04-24 00:49:14.272448] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149671) is not found. Dropping the request. 00:35:30.618 Executing: test_write_invalid_db 00:35:30.618 Waiting for AER completion... 00:35:30.618 Failure: test_write_invalid_db 00:35:30.618 00:35:30.618 Executing: test_invalid_db_write_overflow_sq 00:35:30.618 Waiting for AER completion... 00:35:30.618 Failure: test_invalid_db_write_overflow_sq 00:35:30.618 00:35:30.618 Executing: test_invalid_db_write_overflow_cq 00:35:30.618 Waiting for AER completion... 
00:35:30.618 Failure: test_invalid_db_write_overflow_cq 00:35:30.618 00:35:30.618 ************************************ 00:35:30.618 END TEST nvme_doorbell_aers 00:35:30.618 ************************************ 00:35:30.618 00:35:30.618 real 0m10.125s 00:35:30.618 user 0m7.399s 00:35:30.618 sys 0m2.665s 00:35:30.618 00:49:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:30.618 00:49:23 -- common/autotest_common.sh@10 -- # set +x 00:35:30.618 00:49:24 -- nvme/nvme.sh@97 -- # uname 00:35:30.618 00:49:24 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:35:30.618 00:49:24 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:35:30.618 00:49:24 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:35:30.618 00:49:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:30.618 00:49:24 -- common/autotest_common.sh@10 -- # set +x 00:35:30.618 ************************************ 00:35:30.618 START TEST nvme_multi_aen 00:35:30.618 ************************************ 00:35:30.618 00:49:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:35:30.618 [2024-04-24 00:49:24.378311] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149671) is not found. Dropping the request. 00:35:30.618 [2024-04-24 00:49:24.378740] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149671) is not found. Dropping the request. 00:35:30.618 [2024-04-24 00:49:24.378916] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149671) is not found. Dropping the request. 00:35:30.618 Child process pid: 149868 00:35:31.186 [Child] Asynchronous Event Request test 00:35:31.186 [Child] Attached to 0000:00:10.0 00:35:31.186 [Child] Registering asynchronous event callbacks... 00:35:31.186 [Child] Getting orig temperature thresholds of all controllers 00:35:31.186 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:35:31.186 [Child] Waiting for all controllers to trigger AER and reset threshold 00:35:31.186 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:35:31.186 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:35:31.186 [Child] Cleaning up... 00:35:31.186 Asynchronous Event Request test 00:35:31.186 Attached to 0000:00:10.0 00:35:31.186 Reset controller to setup AER completions for this process 00:35:31.186 Registering asynchronous event callbacks... 00:35:31.186 Getting orig temperature thresholds of all controllers 00:35:31.186 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:35:31.186 Setting all controllers temperature threshold low to trigger AER 00:35:31.186 Waiting for all controllers temperature threshold to be set lower 00:35:31.186 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:35:31.186 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:35:31.186 Waiting for all controllers to trigger AER and reset threshold 00:35:31.186 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:35:31.186 Cleaning up... 
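The nvme_doorbell_aers step a little earlier in the log enumerates NVMe controllers with gen_nvme.sh piped through jq and then runs doorbell_aers against each PCI address under a 10 second timeout. A condensed sketch of that loop, with the paths and jq filter copied from the traced commands (the loop variable names are illustrative), is:

rootdir=/home/vagrant/spdk_repo/spdk
# Collect the PCI addresses (bdfs) of all NVMe controllers, then run the
# doorbell/AER test against each one with a bounded runtime.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done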
00:35:31.186 ************************************ 00:35:31.186 END TEST nvme_multi_aen 00:35:31.186 ************************************ 00:35:31.186 00:35:31.186 real 0m0.755s 00:35:31.186 user 0m0.248s 00:35:31.186 sys 0m0.304s 00:35:31.186 00:49:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:31.186 00:49:24 -- common/autotest_common.sh@10 -- # set +x 00:35:31.186 00:49:24 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:35:31.186 00:49:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:35:31.186 00:49:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:31.186 00:49:24 -- common/autotest_common.sh@10 -- # set +x 00:35:31.186 ************************************ 00:35:31.186 START TEST nvme_startup 00:35:31.186 ************************************ 00:35:31.186 00:49:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:35:31.754 Initializing NVMe Controllers 00:35:31.754 Attached to 0000:00:10.0 00:35:31.754 Initialization complete. 00:35:31.754 Time used:243573.625 (us). 00:35:31.754 00:35:31.754 real 0m0.405s 00:35:31.754 user 0m0.157s 00:35:31.754 sys 0m0.146s 00:35:31.754 00:49:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:31.754 00:49:25 -- common/autotest_common.sh@10 -- # set +x 00:35:31.754 ************************************ 00:35:31.754 END TEST nvme_startup 00:35:31.754 ************************************ 00:35:31.754 00:49:25 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:35:31.754 00:49:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:31.754 00:49:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:31.754 00:49:25 -- common/autotest_common.sh@10 -- # set +x 00:35:31.754 ************************************ 00:35:31.754 START TEST nvme_multi_secondary 00:35:31.754 ************************************ 00:35:31.754 00:49:25 -- common/autotest_common.sh@1111 -- # nvme_multi_secondary 00:35:31.754 00:49:25 -- nvme/nvme.sh@52 -- # pid0=149941 00:35:31.754 00:49:25 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:35:31.754 00:49:25 -- nvme/nvme.sh@54 -- # pid1=149942 00:35:31.754 00:49:25 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:35:31.754 00:49:25 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:35:35.937 Initializing NVMe Controllers 00:35:35.937 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:35.937 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:35:35.937 Initialization complete. Launching workers. 00:35:35.937 ======================================================== 00:35:35.937 Latency(us) 00:35:35.937 Device Information : IOPS MiB/s Average min max 00:35:35.937 PCIE (0000:00:10.0) NSID 1 from core 2: 12719.65 49.69 1257.67 175.69 18122.75 00:35:35.937 ======================================================== 00:35:35.937 Total : 12719.65 49.69 1257.67 175.69 18122.75 00:35:35.937 00:35:35.937 00:49:29 -- nvme/nvme.sh@56 -- # wait 149941 00:35:35.937 Initializing NVMe Controllers 00:35:35.937 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:35.937 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:35:35.937 Initialization complete. Launching workers. 
00:35:35.937 ======================================================== 00:35:35.937 Latency(us) 00:35:35.937 Device Information : IOPS MiB/s Average min max 00:35:35.937 PCIE (0000:00:10.0) NSID 1 from core 1: 30138.67 117.73 530.51 169.14 2246.59 00:35:35.937 ======================================================== 00:35:35.937 Total : 30138.67 117.73 530.51 169.14 2246.59 00:35:35.937 00:35:37.312 Initializing NVMe Controllers 00:35:37.312 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:37.312 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:35:37.312 Initialization complete. Launching workers. 00:35:37.312 ======================================================== 00:35:37.312 Latency(us) 00:35:37.312 Device Information : IOPS MiB/s Average min max 00:35:37.312 PCIE (0000:00:10.0) NSID 1 from core 0: 37993.40 148.41 420.78 150.75 2743.48 00:35:37.312 ======================================================== 00:35:37.312 Total : 37993.40 148.41 420.78 150.75 2743.48 00:35:37.312 00:35:37.312 00:49:30 -- nvme/nvme.sh@57 -- # wait 149942 00:35:37.312 00:49:30 -- nvme/nvme.sh@61 -- # pid0=150021 00:35:37.312 00:49:30 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:35:37.312 00:49:30 -- nvme/nvme.sh@63 -- # pid1=150022 00:35:37.312 00:49:30 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:35:37.312 00:49:30 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:35:40.596 Initializing NVMe Controllers 00:35:40.596 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:40.596 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:35:40.596 Initialization complete. Launching workers. 00:35:40.596 ======================================================== 00:35:40.596 Latency(us) 00:35:40.596 Device Information : IOPS MiB/s Average min max 00:35:40.596 PCIE (0000:00:10.0) NSID 1 from core 0: 30474.67 119.04 524.67 173.24 5346.33 00:35:40.596 ======================================================== 00:35:40.596 Total : 30474.67 119.04 524.67 173.24 5346.33 00:35:40.596 00:35:40.853 Initializing NVMe Controllers 00:35:40.853 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:40.853 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:35:40.853 Initialization complete. Launching workers. 00:35:40.853 ======================================================== 00:35:40.853 Latency(us) 00:35:40.853 Device Information : IOPS MiB/s Average min max 00:35:40.853 PCIE (0000:00:10.0) NSID 1 from core 1: 30645.29 119.71 521.76 171.85 5382.30 00:35:40.853 ======================================================== 00:35:40.853 Total : 30645.29 119.71 521.76 171.85 5382.30 00:35:40.853 00:35:42.755 Initializing NVMe Controllers 00:35:42.755 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:35:42.755 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:35:42.755 Initialization complete. Launching workers. 
00:35:42.755 ======================================================== 00:35:42.755 Latency(us) 00:35:42.755 Device Information : IOPS MiB/s Average min max 00:35:42.755 PCIE (0000:00:10.0) NSID 1 from core 2: 16345.27 63.85 978.00 163.97 24858.62 00:35:42.755 ======================================================== 00:35:42.755 Total : 16345.27 63.85 978.00 163.97 24858.62 00:35:42.755 00:35:42.755 00:49:36 -- nvme/nvme.sh@65 -- # wait 150021 00:35:42.755 00:49:36 -- nvme/nvme.sh@66 -- # wait 150022 00:35:42.755 00:35:42.755 real 0m10.786s 00:35:42.755 user 0m18.834s 00:35:42.755 sys 0m0.952s 00:35:42.755 00:49:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:42.755 ************************************ 00:35:42.755 END TEST nvme_multi_secondary 00:35:42.755 ************************************ 00:35:42.755 00:49:36 -- common/autotest_common.sh@10 -- # set +x 00:35:42.755 00:49:36 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:35:42.755 00:49:36 -- nvme/nvme.sh@102 -- # kill_stub 00:35:42.755 00:49:36 -- common/autotest_common.sh@1075 -- # [[ -e /proc/149169 ]] 00:35:42.755 00:49:36 -- common/autotest_common.sh@1076 -- # kill 149169 00:35:42.755 00:49:36 -- common/autotest_common.sh@1077 -- # wait 149169 00:35:42.755 [2024-04-24 00:49:36.276625] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149867) is not found. Dropping the request. 00:35:42.755 [2024-04-24 00:49:36.276761] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149867) is not found. Dropping the request. 00:35:42.755 [2024-04-24 00:49:36.276814] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149867) is not found. Dropping the request. 00:35:42.755 [2024-04-24 00:49:36.276860] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149867) is not found. Dropping the request. 00:35:43.013 00:49:36 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:35:43.013 00:49:36 -- common/autotest_common.sh@1083 -- # echo 2 00:35:43.013 00:49:36 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:35:43.014 00:49:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:43.014 00:49:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:43.014 00:49:36 -- common/autotest_common.sh@10 -- # set +x 00:35:43.014 ************************************ 00:35:43.014 START TEST bdev_nvme_reset_stuck_adm_cmd 00:35:43.014 ************************************ 00:35:43.014 00:49:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:35:43.014 * Looking for test storage... 
00:35:43.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:35:43.014 00:49:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:35:43.014 00:49:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:35:43.014 00:49:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:35:43.014 00:49:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:35:43.014 00:49:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:35:43.014 00:49:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:35:43.014 00:49:36 -- common/autotest_common.sh@1510 -- # bdfs=() 00:35:43.014 00:49:36 -- common/autotest_common.sh@1510 -- # local bdfs 00:35:43.014 00:49:36 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:35:43.014 00:49:36 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:35:43.014 00:49:36 -- common/autotest_common.sh@1499 -- # bdfs=() 00:35:43.014 00:49:36 -- common/autotest_common.sh@1499 -- # local bdfs 00:35:43.014 00:49:36 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:43.014 00:49:36 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:43.014 00:49:36 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:35:43.273 00:49:36 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:35:43.273 00:49:36 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:35:43.273 00:49:36 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:35:43.273 00:49:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:35:43.273 00:49:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:35:43.273 00:49:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=150178 00:35:43.273 00:49:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:35:43.273 00:49:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:35:43.273 00:49:36 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 150178 00:35:43.273 00:49:36 -- common/autotest_common.sh@817 -- # '[' -z 150178 ']' 00:35:43.273 00:49:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:43.273 00:49:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:43.273 00:49:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:43.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:43.273 00:49:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:43.273 00:49:36 -- common/autotest_common.sh@10 -- # set +x 00:35:43.273 [2024-04-24 00:49:36.910987] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:35:43.273 [2024-04-24 00:49:36.911191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150178 ] 00:35:43.531 [2024-04-24 00:49:37.142452] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:43.789 [2024-04-24 00:49:37.403896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.789 [2024-04-24 00:49:37.403996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:43.789 [2024-04-24 00:49:37.404102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:43.789 [2024-04-24 00:49:37.404388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.724 00:49:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:44.724 00:49:38 -- common/autotest_common.sh@850 -- # return 0 00:35:44.724 00:49:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:35:44.724 00:49:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:44.724 00:49:38 -- common/autotest_common.sh@10 -- # set +x 00:35:44.724 nvme0n1 00:35:44.724 00:49:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:44.724 00:49:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:35:44.983 00:49:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_V8naR.txt 00:35:44.983 00:49:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:35:44.983 00:49:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:44.983 00:49:38 -- common/autotest_common.sh@10 -- # set +x 00:35:44.983 true 00:35:44.983 00:49:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:44.983 00:49:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:35:44.983 00:49:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1713919778 00:35:44.983 00:49:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=150204 00:35:44.983 00:49:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:35:44.983 00:49:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:35:44.983 00:49:38 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:35:46.886 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:35:46.886 00:49:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:46.886 00:49:40 -- common/autotest_common.sh@10 -- # set +x 00:35:46.886 [2024-04-24 00:49:40.545624] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:35:46.886 [2024-04-24 00:49:40.546106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:46.886 [2024-04-24 00:49:40.546240] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:35:46.886 [2024-04-24 00:49:40.546373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:46.886 [2024-04-24 00:49:40.548367] 
bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:46.886 00:49:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:46.886 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 150204 00:35:46.886 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 150204 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 150204 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:46.887 00:49:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:46.887 00:49:40 -- common/autotest_common.sh@10 -- # set +x 00:35:46.887 00:49:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_V8naR.txt 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_V8naR.txt 00:35:46.887 00:49:40 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 150178 00:35:46.887 00:49:40 -- common/autotest_common.sh@936 -- # '[' -z 150178 ']' 00:35:46.887 00:49:40 -- common/autotest_common.sh@940 -- # kill -0 150178 00:35:46.887 00:49:40 -- common/autotest_common.sh@941 -- # uname 00:35:46.887 00:49:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:46.887 00:49:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
150178 00:35:46.887 00:49:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:46.887 00:49:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:46.887 00:49:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150178' 00:35:46.887 killing process with pid 150178 00:35:46.887 00:49:40 -- common/autotest_common.sh@955 -- # kill 150178 00:35:46.887 00:49:40 -- common/autotest_common.sh@960 -- # wait 150178 00:35:50.169 00:49:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:35:50.169 00:49:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:35:50.169 00:35:50.169 real 0m6.736s 00:35:50.169 user 0m23.491s 00:35:50.169 sys 0m0.689s 00:35:50.169 00:49:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:50.169 00:49:43 -- common/autotest_common.sh@10 -- # set +x 00:35:50.169 ************************************ 00:35:50.169 END TEST bdev_nvme_reset_stuck_adm_cmd 00:35:50.169 ************************************ 00:35:50.169 00:49:43 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:35:50.169 00:49:43 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:35:50.169 00:49:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:50.169 00:49:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:50.169 00:49:43 -- common/autotest_common.sh@10 -- # set +x 00:35:50.169 ************************************ 00:35:50.169 START TEST nvme_fio 00:35:50.169 ************************************ 00:35:50.169 00:49:43 -- common/autotest_common.sh@1111 -- # nvme_fio_test 00:35:50.169 00:49:43 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:35:50.169 00:49:43 -- nvme/nvme.sh@32 -- # ran_fio=false 00:35:50.169 00:49:43 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:35:50.169 00:49:43 -- common/autotest_common.sh@1499 -- # bdfs=() 00:35:50.169 00:49:43 -- common/autotest_common.sh@1499 -- # local bdfs 00:35:50.169 00:49:43 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:50.169 00:49:43 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:50.169 00:49:43 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:35:50.169 00:49:43 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:35:50.169 00:49:43 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:35:50.169 00:49:43 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:35:50.169 00:49:43 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:35:50.169 00:49:43 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:35:50.169 00:49:43 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:35:50.169 00:49:43 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:35:50.169 00:49:43 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:35:50.169 00:49:43 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:35:50.427 00:49:44 -- nvme/nvme.sh@41 -- # bs=4096 00:35:50.427 00:49:44 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:35:50.427 00:49:44 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:35:50.427 00:49:44 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:35:50.427 00:49:44 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:50.427 00:49:44 -- common/autotest_common.sh@1325 -- # local sanitizers 00:35:50.427 00:49:44 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:35:50.427 00:49:44 -- common/autotest_common.sh@1327 -- # shift 00:35:50.427 00:49:44 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:35:50.427 00:49:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.427 00:49:44 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:35:50.427 00:49:44 -- common/autotest_common.sh@1331 -- # grep libasan 00:35:50.427 00:49:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:50.427 00:49:44 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:35:50.427 00:49:44 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:35:50.427 00:49:44 -- common/autotest_common.sh@1333 -- # break 00:35:50.427 00:49:44 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:35:50.427 00:49:44 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:35:50.685 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:35:50.685 fio-3.35 00:35:50.685 Starting 1 thread 00:35:53.967 00:35:53.967 test: (groupid=0, jobs=1): err= 0: pid=150365: Wed Apr 24 00:49:47 2024 00:35:53.967 read: IOPS=18.5k, BW=72.5MiB/s (76.0MB/s)(145MiB/2001msec) 00:35:53.967 slat (nsec): min=4385, max=64898, avg=5501.53, stdev=1491.22 00:35:53.967 clat (usec): min=307, max=8870, avg=3434.19, stdev=662.31 00:35:53.967 lat (usec): min=314, max=8876, avg=3439.69, stdev=663.04 00:35:53.967 clat percentiles (usec): 00:35:53.967 | 1.00th=[ 1893], 5.00th=[ 2573], 10.00th=[ 2933], 20.00th=[ 3032], 00:35:53.967 | 30.00th=[ 3097], 40.00th=[ 3163], 50.00th=[ 3228], 60.00th=[ 3589], 00:35:53.967 | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 4015], 95.00th=[ 4178], 00:35:53.967 | 99.00th=[ 5669], 99.50th=[ 6587], 99.90th=[ 8160], 99.95th=[ 8455], 00:35:53.967 | 99.99th=[ 8717] 00:35:53.967 bw ( KiB/s): min=61312, max=85032, per=98.86%, avg=73349.33, stdev=11863.98, samples=3 00:35:53.967 iops : min=15328, max=21258, avg=18337.33, stdev=2965.99, samples=3 00:35:53.967 write: IOPS=18.6k, BW=72.5MiB/s (76.0MB/s)(145MiB/2001msec); 0 zone resets 00:35:53.967 slat (nsec): min=4525, max=81732, avg=5655.80, stdev=1500.84 00:35:53.967 clat (usec): min=342, max=8721, avg=3442.66, stdev=672.92 00:35:53.967 lat (usec): min=348, max=8733, avg=3448.32, stdev=673.64 00:35:53.967 clat percentiles (usec): 00:35:53.967 | 1.00th=[ 1926], 5.00th=[ 2606], 10.00th=[ 2933], 20.00th=[ 3032], 00:35:53.967 | 30.00th=[ 3097], 40.00th=[ 3163], 50.00th=[ 3228], 60.00th=[ 3589], 00:35:53.967 | 70.00th=[ 3785], 80.00th=[ 3884], 90.00th=[ 4047], 95.00th=[ 4228], 00:35:53.967 | 99.00th=[ 5800], 99.50th=[ 6718], 99.90th=[ 8160], 99.95th=[ 8586], 00:35:53.967 | 99.99th=[ 8717] 00:35:53.967 bw ( KiB/s): min=61704, max=84720, per=98.61%, 
avg=73218.67, stdev=11508.01, samples=3 00:35:53.967 iops : min=15426, max=21180, avg=18304.67, stdev=2877.00, samples=3 00:35:53.967 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:35:53.967 lat (msec) : 2=1.27%, 4=87.15%, 10=11.54% 00:35:53.967 cpu : usr=99.85%, sys=0.10%, ctx=7, majf=0, minf=37 00:35:53.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:35:53.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:53.967 issued rwts: total=37116,37143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:53.967 00:35:53.967 Run status group 0 (all jobs): 00:35:53.967 READ: bw=72.5MiB/s (76.0MB/s), 72.5MiB/s-72.5MiB/s (76.0MB/s-76.0MB/s), io=145MiB (152MB), run=2001-2001msec 00:35:53.967 WRITE: bw=72.5MiB/s (76.0MB/s), 72.5MiB/s-72.5MiB/s (76.0MB/s-76.0MB/s), io=145MiB (152MB), run=2001-2001msec 00:35:54.225 ----------------------------------------------------- 00:35:54.225 Suppressions used: 00:35:54.225 count bytes template 00:35:54.225 1 32 /usr/src/fio/parse.c 00:35:54.225 ----------------------------------------------------- 00:35:54.225 00:35:54.225 00:49:47 -- nvme/nvme.sh@44 -- # ran_fio=true 00:35:54.225 00:49:47 -- nvme/nvme.sh@46 -- # true 00:35:54.225 00:35:54.225 real 0m4.375s 00:35:54.225 user 0m3.545s 00:35:54.225 sys 0m0.526s 00:35:54.225 00:49:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:54.225 00:49:47 -- common/autotest_common.sh@10 -- # set +x 00:35:54.225 ************************************ 00:35:54.225 END TEST nvme_fio 00:35:54.225 ************************************ 00:35:54.225 00:35:54.225 real 0m49.937s 00:35:54.225 user 2m12.664s 00:35:54.225 sys 0m10.692s 00:35:54.225 00:49:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:54.225 00:49:47 -- common/autotest_common.sh@10 -- # set +x 00:35:54.225 ************************************ 00:35:54.225 END TEST nvme 00:35:54.225 ************************************ 00:35:54.225 00:49:47 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:35:54.225 00:49:47 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:35:54.225 00:49:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:54.225 00:49:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:54.225 00:49:47 -- common/autotest_common.sh@10 -- # set +x 00:35:54.484 ************************************ 00:35:54.484 START TEST nvme_scc 00:35:54.484 ************************************ 00:35:54.484 00:49:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:35:54.484 * Looking for test storage... 
00:35:54.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:35:54.484 00:49:48 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:35:54.484 00:49:48 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:35:54.484 00:49:48 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:35:54.484 00:49:48 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:54.484 00:49:48 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:54.484 00:49:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:54.484 00:49:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:54.484 00:49:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:54.484 00:49:48 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:54.484 00:49:48 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:54.484 00:49:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:54.484 00:49:48 -- paths/export.sh@5 -- # export PATH 00:35:54.484 00:49:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:54.484 00:49:48 -- nvme/functions.sh@10 -- # ctrls=() 00:35:54.484 00:49:48 -- nvme/functions.sh@10 -- # declare -A ctrls 00:35:54.484 00:49:48 -- nvme/functions.sh@11 -- # nvmes=() 00:35:54.484 00:49:48 -- nvme/functions.sh@11 -- # declare -A nvmes 00:35:54.484 00:49:48 -- nvme/functions.sh@12 -- # bdfs=() 00:35:54.484 00:49:48 -- nvme/functions.sh@12 -- # declare -A bdfs 00:35:54.484 00:49:48 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:35:54.484 00:49:48 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:35:54.484 00:49:48 -- nvme/functions.sh@14 -- # nvme_name= 00:35:54.484 00:49:48 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:54.484 00:49:48 -- nvme/nvme_scc.sh@12 -- # uname 00:35:54.484 00:49:48 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:35:54.484 00:49:48 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
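The controller register walk that follows is produced by scan_nvme_ctrls/nvme_get in test/common/nvme/functions.sh: each controller found under /sys/class/nvme/nvme* is identified with /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0, and every "field : value" line is read with IFS=: / read -r reg val and eval'd into a dynamically named array (nvme0[vid], nvme0[mdts], nvme0[subnqn], ...). A minimal sketch of that parsing pattern follows; the plain associative array named ctrl and the closing printf are illustrative simplifications, not the actual functions.sh implementation.
  # Sketch only: parse "field : value" lines from nvme id-ctrl into an
  # associative array, roughly as functions.sh does with its eval'd arrays.
  declare -A ctrl
  while IFS=: read -r reg val; do
      reg="${reg//[[:space:]]/}"      # field name with whitespace stripped
      [[ -n "$reg" && -n "$val" ]] || continue
      ctrl["$reg"]="${val# }"         # keep the value as printed, e.g. 0x1b36
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  printf 'vid=%s mdts=%s subnqn=%s\n' "${ctrl[vid]}" "${ctrl[mdts]}" "${ctrl[subnqn]}"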
00:35:54.484 00:49:48 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:54.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:35:54.743 Waiting for block devices as requested 00:35:55.003 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:55.003 00:49:48 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:35:55.003 00:49:48 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:35:55.003 00:49:48 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:35:55.003 00:49:48 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:35:55.003 00:49:48 -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:35:55.003 00:49:48 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:35:55.003 00:49:48 -- scripts/common.sh@15 -- # local i 00:35:55.003 00:49:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:35:55.003 00:49:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:35:55.003 00:49:48 -- scripts/common.sh@24 -- # return 0 00:35:55.003 00:49:48 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:35:55.003 00:49:48 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:35:55.003 00:49:48 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:35:55.003 00:49:48 -- nvme/functions.sh@18 -- # shift 00:35:55.003 00:49:48 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.003 00:49:48 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.003 00:49:48 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.003 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.003 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.003 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.003 00:49:48 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.003 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.003 00:49:48 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.003 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:35:55.003 00:49:48 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:35:55.003 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 
00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:35:55.004 00:49:48 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.004 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:35:55.004 00:49:48 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:35:55.004 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- 
# read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.005 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:35:55.005 00:49:48 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.005 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:35:55.006 
00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:35:55.006 
00:49:48 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.006 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:35:55.006 00:49:48 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.006 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.007 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.007 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.007 00:49:48 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.007 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.007 
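(Annotation: the register-by-register trace above and below is the nvme_get helper splitting `nvme id-ctrl` output on ':' and eval-ing each field into a bash associative array. A minimal stand-alone sketch of that idiom follows; it is illustrative only — the array name ctrl_info and the bare `nvme` path are assumptions, and functions.sh itself uses eval with a caller-supplied array name.)

# Sketch of the parsing idiom traced here: split each "field : value" line of
# `nvme id-ctrl` on ':' and stash it in an associative array.
# Assumes nvme-cli is installed and /dev/nvme0 exists.
declare -A ctrl_info
while IFS=: read -r reg val; do
    reg=$(tr -d '[:space:]' <<< "$reg")            # e.g. "oncs"
    val=$(sed 's/^[[:space:]]*//' <<< "$val")      # keep the raw value text
    [[ -n $reg && -n $val ]] && ctrl_info[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)
echo "oncs=${ctrl_info[oncs]:-unset} sqes=${ctrl_info[sqes]:-unset}"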
00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.007 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.007 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:35:55.007 00:49:48 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:35:55.007 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.267 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.267 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.267 00:49:48 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.267 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.267 00:49:48 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.267 00:49:48 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:35:55.267 00:49:48 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:35:55.267 00:49:48 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:35:55.267 00:49:48 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:35:55.267 00:49:48 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:35:55.267 00:49:48 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:35:55.267 00:49:48 -- nvme/functions.sh@18 -- # shift 00:35:55.267 00:49:48 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 
00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.267 00:49:48 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:35:55.267 00:49:48 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.267 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.267 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.267 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.267 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:35:55.267 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:35:55.267 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 
00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.268 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:35:55.268 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.268 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:35:55.269 00:49:48 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # IFS=: 00:35:55.269 00:49:48 -- nvme/functions.sh@21 -- # read -r reg val 00:35:55.269 00:49:48 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:35:55.269 00:49:48 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:35:55.269 00:49:48 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:35:55.269 00:49:48 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:35:55.269 00:49:48 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:35:55.269 00:49:48 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:35:55.269 00:49:48 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:35:55.269 00:49:48 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:35:55.269 00:49:48 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:35:55.269 00:49:48 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:35:55.269 00:49:48 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:35:55.269 00:49:48 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:35:55.269 00:49:48 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:35:55.269 00:49:48 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:35:55.269 00:49:48 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:35:55.269 00:49:48 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:35:55.269 00:49:48 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:35:55.269 00:49:48 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:35:55.269 00:49:48 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:35:55.269 00:49:48 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:35:55.269 00:49:48 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:35:55.269 00:49:48 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:35:55.269 00:49:48 -- nvme/functions.sh@76 -- # echo 0x15d 00:35:55.269 00:49:48 -- nvme/functions.sh@184 -- # oncs=0x15d 00:35:55.269 00:49:48 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:35:55.269 00:49:48 -- nvme/functions.sh@197 -- # echo nvme0 00:35:55.269 00:49:48 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:35:55.269 00:49:48 -- nvme/functions.sh@206 -- # echo nvme0 00:35:55.269 00:49:48 -- nvme/functions.sh@207 -- # return 0 00:35:55.269 00:49:48 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:35:55.269 00:49:48 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:35:55.269 00:49:48 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:55.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:35:55.785 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:56.719 00:49:50 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:35:56.719 00:49:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:35:56.719 00:49:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:56.719 00:49:50 -- common/autotest_common.sh@10 -- # set +x 00:35:56.719 ************************************ 00:35:56.719 START TEST nvme_simple_copy 00:35:56.719 ************************************ 00:35:56.719 00:49:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:35:56.978 Initializing NVMe Controllers 00:35:56.978 Attaching to 0000:00:10.0 00:35:56.978 Controller supports SCC. Attached to 0000:00:10.0 00:35:56.978 Namespace ID: 1 size: 5GB 00:35:56.978 Initialization complete. 00:35:56.978 00:35:56.978 Controller QEMU NVMe Ctrl (12340 ) 00:35:56.978 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:35:56.978 Namespace Block Size:4096 00:35:56.978 Writing LBAs 0 to 63 with Random Data 00:35:56.978 Copied LBAs from 0 - 63 to the Destination LBA 256 00:35:56.978 LBAs matching Written Data: 64 00:35:57.237 00:35:57.237 real 0m0.348s 00:35:57.237 user 0m0.148s 00:35:57.237 sys 0m0.102s 00:35:57.237 00:49:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:57.237 00:49:50 -- common/autotest_common.sh@10 -- # set +x 00:35:57.237 ************************************ 00:35:57.237 END TEST nvme_simple_copy 00:35:57.237 ************************************ 00:35:57.237 00:35:57.237 real 0m2.803s 00:35:57.237 user 0m0.811s 00:35:57.237 sys 0m1.889s 00:35:57.237 00:49:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:57.237 00:49:50 -- common/autotest_common.sh@10 -- # set +x 00:35:57.237 ************************************ 00:35:57.237 END TEST nvme_scc 00:35:57.237 ************************************ 00:35:57.237 00:49:50 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:35:57.237 00:49:50 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:35:57.237 00:49:50 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:35:57.237 00:49:50 -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]] 00:35:57.237 00:49:50 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:35:57.237 00:49:50 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:35:57.237 00:49:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:57.237 00:49:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:57.237 00:49:50 -- common/autotest_common.sh@10 -- # set +x 00:35:57.237 ************************************ 00:35:57.237 START TEST nvme_rpc 00:35:57.237 ************************************ 00:35:57.237 00:49:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:35:57.237 * Looking for test storage... 
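(Annotation: the simple-copy run above was gated by the ctrl_has_scc check traced just before it, which reduces to a single bit test on the ONCS field read from Identify Controller. A self-contained restatement, using the 0x15d value this QEMU controller reported:)

# Simple Copy support is advertised in bit 8 of ONCS from Identify Controller.
# 0x15d (= 0b1_0101_1101) has bit 8 set, so this controller supports SCC.
oncs=0x15d
if (( oncs & (1 << 8) )); then
    echo "Controller supports SCC."
else
    echo "Controller does not support SCC."
fi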
00:35:57.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:35:57.237 00:49:51 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:57.237 00:49:51 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:35:57.237 00:49:51 -- common/autotest_common.sh@1510 -- # bdfs=() 00:35:57.237 00:49:51 -- common/autotest_common.sh@1510 -- # local bdfs 00:35:57.237 00:49:51 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:35:57.237 00:49:51 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:35:57.237 00:49:51 -- common/autotest_common.sh@1499 -- # bdfs=() 00:35:57.237 00:49:51 -- common/autotest_common.sh@1499 -- # local bdfs 00:35:57.237 00:49:51 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:57.237 00:49:51 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:57.237 00:49:51 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:35:57.496 00:49:51 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:35:57.496 00:49:51 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:35:57.496 00:49:51 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:35:57.496 00:49:51 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:35:57.496 00:49:51 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=150862 00:35:57.496 00:49:51 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:35:57.496 00:49:51 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:35:57.496 00:49:51 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 150862 00:35:57.496 00:49:51 -- common/autotest_common.sh@817 -- # '[' -z 150862 ']' 00:35:57.496 00:49:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:57.496 00:49:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:57.496 00:49:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:57.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:57.496 00:49:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:57.496 00:49:51 -- common/autotest_common.sh@10 -- # set +x 00:35:57.496 [2024-04-24 00:49:51.162081] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
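(Annotation: get_first_nvme_bdf, traced at the top of the nvme_rpc test above, is essentially a jq query over gen_nvme.sh's generated JSON config. A condensed sketch using this run's repo path:)

# Condensed form of get_first_nvme_bdf as traced above: list NVMe transport
# addresses from gen_nvme.sh's JSON config and take the first one.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
echo "${bdfs[0]}"    # 0000:00:10.0 in this run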
00:35:57.496 [2024-04-24 00:49:51.162255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150862 ] 00:35:57.755 [2024-04-24 00:49:51.349740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:58.012 [2024-04-24 00:49:51.633866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:58.012 [2024-04-24 00:49:51.633877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:58.956 00:49:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:58.956 00:49:52 -- common/autotest_common.sh@850 -- # return 0 00:35:58.956 00:49:52 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:35:59.220 Nvme0n1 00:35:59.220 00:49:52 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:35:59.220 00:49:52 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:35:59.478 request: 00:35:59.478 { 00:35:59.478 "filename": "non_existing_file", 00:35:59.478 "bdev_name": "Nvme0n1", 00:35:59.478 "method": "bdev_nvme_apply_firmware", 00:35:59.478 "req_id": 1 00:35:59.478 } 00:35:59.478 Got JSON-RPC error response 00:35:59.478 response: 00:35:59.478 { 00:35:59.478 "code": -32603, 00:35:59.478 "message": "open file failed." 00:35:59.478 } 00:35:59.478 00:49:53 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:35:59.478 00:49:53 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:35:59.478 00:49:53 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:35:59.736 00:49:53 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:35:59.736 00:49:53 -- nvme/nvme_rpc.sh@40 -- # killprocess 150862 00:35:59.736 00:49:53 -- common/autotest_common.sh@936 -- # '[' -z 150862 ']' 00:35:59.736 00:49:53 -- common/autotest_common.sh@940 -- # kill -0 150862 00:35:59.736 00:49:53 -- common/autotest_common.sh@941 -- # uname 00:35:59.736 00:49:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:59.736 00:49:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150862 00:35:59.736 00:49:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:59.736 00:49:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:59.736 00:49:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150862' 00:35:59.736 killing process with pid 150862 00:35:59.736 00:49:53 -- common/autotest_common.sh@955 -- # kill 150862 00:35:59.736 00:49:53 -- common/autotest_common.sh@960 -- # wait 150862 00:36:02.281 ************************************ 00:36:02.281 END TEST nvme_rpc 00:36:02.281 ************************************ 00:36:02.281 00:36:02.281 real 0m5.101s 00:36:02.281 user 0m9.662s 00:36:02.281 sys 0m0.727s 00:36:02.281 00:49:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:02.281 00:49:56 -- common/autotest_common.sh@10 -- # set +x 00:36:02.281 00:49:56 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:36:02.281 00:49:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:02.281 00:49:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:02.281 00:49:56 -- common/autotest_common.sh@10 -- # set +x 00:36:02.538 ************************************ 00:36:02.538 
START TEST nvme_rpc_timeouts 00:36:02.538 ************************************ 00:36:02.538 00:49:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:36:02.538 * Looking for test storage... 00:36:02.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:36:02.538 00:49:56 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:02.538 00:49:56 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_150962 00:36:02.538 00:49:56 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_150962 00:36:02.538 00:49:56 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=150990 00:36:02.538 00:49:56 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:36:02.538 00:49:56 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:36:02.538 00:49:56 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 150990 00:36:02.538 00:49:56 -- common/autotest_common.sh@817 -- # '[' -z 150990 ']' 00:36:02.538 00:49:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.538 00:49:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:02.538 00:49:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.538 00:49:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:02.538 00:49:56 -- common/autotest_common.sh@10 -- # set +x 00:36:02.538 [2024-04-24 00:49:56.327778] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:36:02.538 [2024-04-24 00:49:56.328051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150990 ] 00:36:02.795 [2024-04-24 00:49:56.518576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:03.052 [2024-04-24 00:49:56.819850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.052 [2024-04-24 00:49:56.819867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.427 00:49:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:04.427 00:49:57 -- common/autotest_common.sh@850 -- # return 0 00:36:04.427 Checking default timeout settings: 00:36:04.427 00:49:57 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:36:04.427 00:49:57 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:36:04.684 Making settings changes with rpc: 00:36:04.684 00:49:58 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:36:04.684 00:49:58 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:36:04.942 Check default vs. modified settings: 00:36:04.942 00:49:58 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:36:04.942 00:49:58 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_150962 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_150962 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:36:05.555 Setting action_on_timeout is changed as expected. 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_150962 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_150962 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:36:05.555 Setting timeout_us is changed as expected. 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_150962 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_150962 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:36:05.555 Setting timeout_admin_us is changed as expected. 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
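(Annotation: the default-vs-modified comparison traced above repeats the same grep/awk/sed pattern for each setting. A condensed, function-shaped sketch of that check — slightly stricter than the script, which only verifies that the value changed; file names match the /tmp/settings_*_150962 dumps used in this run:)

# Extract one field from the default and modified save_config dumps and
# confirm it changed to the value pushed via bdev_nvme_set_options.
check_setting () {
    local name=$1 expected=$2 before after
    before=$(grep "$name" /tmp/settings_default_150962  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$name" /tmp/settings_modified_150962 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [[ $before != "$after" && $after == "$expected" ]]; then
        echo "Setting $name is changed as expected."
    else
        echo "Setting $name did not change as expected." >&2
        return 1
    fi
}
check_setting action_on_timeout abort
check_setting timeout_us 12000000
check_setting timeout_admin_us 24000000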
00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_150962 /tmp/settings_modified_150962 00:36:05.555 00:49:59 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 150990 00:36:05.555 00:49:59 -- common/autotest_common.sh@936 -- # '[' -z 150990 ']' 00:36:05.555 00:49:59 -- common/autotest_common.sh@940 -- # kill -0 150990 00:36:05.555 00:49:59 -- common/autotest_common.sh@941 -- # uname 00:36:05.555 00:49:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:05.555 00:49:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 150990 00:36:05.555 00:49:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:05.555 00:49:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:05.555 killing process with pid 150990 00:36:05.555 00:49:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 150990' 00:36:05.555 00:49:59 -- common/autotest_common.sh@955 -- # kill 150990 00:36:05.555 00:49:59 -- common/autotest_common.sh@960 -- # wait 150990 00:36:08.082 RPC TIMEOUT SETTING TEST PASSED. 00:36:08.082 00:50:01 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:36:08.082 ************************************ 00:36:08.082 END TEST nvme_rpc_timeouts 00:36:08.082 ************************************ 00:36:08.082 00:36:08.082 real 0m5.571s 00:36:08.082 user 0m10.794s 00:36:08.082 sys 0m0.737s 00:36:08.082 00:50:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:08.082 00:50:01 -- common/autotest_common.sh@10 -- # set +x 00:36:08.082 00:50:01 -- spdk/autotest.sh@241 -- # '[' 1 -eq 0 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@245 -- # [[ 0 -eq 1 ]] 00:36:08.082 00:50:01 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@258 -- # timing_exit lib 00:36:08.082 00:50:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:36:08.082 00:50:01 -- common/autotest_common.sh@10 -- # set +x 00:36:08.082 00:50:01 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@277 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:36:08.082 00:50:01 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:36:08.082 00:50:01 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:36:08.082 00:50:01 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:36:08.082 00:50:01 -- spdk/autotest.sh@373 -- # [[ 1 -eq 1 ]] 00:36:08.082 00:50:01 -- spdk/autotest.sh@374 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:36:08.082 00:50:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:36:08.082 00:50:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 
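(Annotation: every START TEST / END TEST banner in this log comes from the run_test wrapper in autotest_common.sh. A simplified stand-in showing its shape — illustrative only, not the autotest_common.sh implementation:)

# Print START/END markers around a test command and propagate its exit status,
# matching the banner format that the CI log parser keys on.
run_test () {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test demo_true true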
00:36:08.082 00:50:01 -- common/autotest_common.sh@10 -- # set +x 00:36:08.082 ************************************ 00:36:08.082 START TEST blockdev_raid5f 00:36:08.082 ************************************ 00:36:08.082 00:50:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:36:08.340 * Looking for test storage... 00:36:08.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:36:08.340 00:50:01 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:36:08.340 00:50:01 -- bdev/nbd_common.sh@6 -- # set -e 00:36:08.340 00:50:01 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:36:08.340 00:50:01 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:36:08.340 00:50:01 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:36:08.340 00:50:01 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:36:08.340 00:50:01 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:36:08.340 00:50:01 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:36:08.340 00:50:01 -- bdev/blockdev.sh@20 -- # : 00:36:08.340 00:50:01 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:36:08.340 00:50:01 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:36:08.340 00:50:01 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:36:08.340 00:50:01 -- bdev/blockdev.sh@674 -- # uname -s 00:36:08.340 00:50:01 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:36:08.340 00:50:01 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:36:08.340 00:50:01 -- bdev/blockdev.sh@682 -- # test_type=raid5f 00:36:08.340 00:50:01 -- bdev/blockdev.sh@683 -- # crypto_device= 00:36:08.340 00:50:01 -- bdev/blockdev.sh@684 -- # dek= 00:36:08.340 00:50:01 -- bdev/blockdev.sh@685 -- # env_ctx= 00:36:08.340 00:50:01 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:36:08.340 00:50:01 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:36:08.340 00:50:01 -- bdev/blockdev.sh@690 -- # [[ raid5f == bdev ]] 00:36:08.340 00:50:01 -- bdev/blockdev.sh@690 -- # [[ raid5f == crypto_* ]] 00:36:08.340 00:50:01 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:36:08.340 00:50:01 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=151163 00:36:08.340 00:50:01 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:36:08.340 00:50:01 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:36:08.340 00:50:01 -- bdev/blockdev.sh@49 -- # waitforlisten 151163 00:36:08.340 00:50:01 -- common/autotest_common.sh@817 -- # '[' -z 151163 ']' 00:36:08.340 00:50:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.340 00:50:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:08.340 00:50:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.340 00:50:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:08.340 00:50:01 -- common/autotest_common.sh@10 -- # set +x 00:36:08.340 [2024-04-24 00:50:02.038586] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
00:36:08.340 [2024-04-24 00:50:02.038933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151163 ] 00:36:08.599 [2024-04-24 00:50:02.199170] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.857 [2024-04-24 00:50:02.420010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:09.805 00:50:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:09.805 00:50:03 -- common/autotest_common.sh@850 -- # return 0 00:36:09.805 00:50:03 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:36:09.805 00:50:03 -- bdev/blockdev.sh@726 -- # setup_raid5f_conf 00:36:09.805 00:50:03 -- bdev/blockdev.sh@280 -- # rpc_cmd 00:36:09.805 00:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:09.805 00:50:03 -- common/autotest_common.sh@10 -- # set +x 00:36:09.805 Malloc0 00:36:09.805 Malloc1 00:36:09.805 Malloc2 00:36:09.805 00:50:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:09.806 00:50:03 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:36:09.806 00:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:09.806 00:50:03 -- common/autotest_common.sh@10 -- # set +x 00:36:09.806 00:50:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:09.806 00:50:03 -- bdev/blockdev.sh@740 -- # cat 00:36:09.806 00:50:03 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:36:09.806 00:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:09.806 00:50:03 -- common/autotest_common.sh@10 -- # set +x 00:36:09.806 00:50:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:09.806 00:50:03 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:36:09.806 00:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:09.806 00:50:03 -- common/autotest_common.sh@10 -- # set +x 00:36:09.806 00:50:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:09.806 00:50:03 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:36:09.806 00:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:09.806 00:50:03 -- common/autotest_common.sh@10 -- # set +x 00:36:09.806 00:50:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:09.806 00:50:03 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:36:09.806 00:50:03 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:36:09.806 00:50:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:09.806 00:50:03 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:36:09.806 00:50:03 -- common/autotest_common.sh@10 -- # set +x 00:36:09.806 00:50:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:10.064 00:50:03 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:36:10.064 00:50:03 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "c7552187-0e4b-4abc-bb83-1367b76e43d7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c7552187-0e4b-4abc-bb83-1367b76e43d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "c7552187-0e4b-4abc-bb83-1367b76e43d7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4a15f15f-f430-4543-82c0-bf4148a5fc71",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "dedc8337-e299-4242-8b07-7c0af54fc5a8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "a1232378-3212-4b03-9dc8-4817370e39f6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:36:10.064 00:50:03 -- bdev/blockdev.sh@749 -- # jq -r .name 00:36:10.065 00:50:03 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:36:10.065 00:50:03 -- bdev/blockdev.sh@752 -- # hello_world_bdev=raid5f 00:36:10.065 00:50:03 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:36:10.065 00:50:03 -- bdev/blockdev.sh@754 -- # killprocess 151163 00:36:10.065 00:50:03 -- common/autotest_common.sh@936 -- # '[' -z 151163 ']' 00:36:10.065 00:50:03 -- common/autotest_common.sh@940 -- # kill -0 151163 00:36:10.065 00:50:03 -- common/autotest_common.sh@941 -- # uname 00:36:10.065 00:50:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:10.065 00:50:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151163 00:36:10.065 00:50:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:10.065 00:50:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:10.065 00:50:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151163' 00:36:10.065 killing process with pid 151163 00:36:10.065 00:50:03 -- common/autotest_common.sh@955 -- # kill 151163 00:36:10.065 00:50:03 -- common/autotest_common.sh@960 -- # wait 151163 00:36:13.348 00:50:06 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:13.348 00:50:06 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:36:13.348 00:50:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:36:13.348 00:50:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:13.348 00:50:06 -- common/autotest_common.sh@10 -- # set +x 00:36:13.348 ************************************ 00:36:13.348 START TEST bdev_hello_world 00:36:13.348 ************************************ 00:36:13.348 00:50:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:36:13.348 [2024-04-24 00:50:06.805432] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 
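(Annotation: the raid5f JSON dumped a little earlier — three Malloc base bdevs, strip_size_kb 2, state online — came from rpc_cmd bdev_get_bdevs. The same details can be pulled from a live target with rpc.py and jq, assuming the default /var/tmp/spdk.sock RPC socket and this run's repo path:)

# Query the raid5f volume shown above from a running SPDK target.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_get_bdevs -b raid5f | jq -r '
    .[0].driver_specific.raid
    | "level=\(.raid_level) state=\(.state) base_bdevs=\(.num_base_bdevs)"'
# expected for this configuration: level=raid5f state=online base_bdevs=3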
00:36:13.348 [2024-04-24 00:50:06.805623] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151239 ] 00:36:13.348 [2024-04-24 00:50:06.984725] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.606 [2024-04-24 00:50:07.230746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:14.173 [2024-04-24 00:50:07.896502] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:36:14.173 [2024-04-24 00:50:07.896575] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:36:14.173 [2024-04-24 00:50:07.896625] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:36:14.173 [2024-04-24 00:50:07.897291] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:36:14.173 [2024-04-24 00:50:07.897491] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:36:14.173 [2024-04-24 00:50:07.897536] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:36:14.173 [2024-04-24 00:50:07.897619] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:36:14.173 00:36:14.173 [2024-04-24 00:50:07.897655] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:36:16.122 ************************************ 00:36:16.122 END TEST bdev_hello_world 00:36:16.122 ************************************ 00:36:16.122 00:36:16.122 real 0m3.157s 00:36:16.122 user 0m2.732s 00:36:16.122 sys 0m0.308s 00:36:16.122 00:50:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:16.122 00:50:09 -- common/autotest_common.sh@10 -- # set +x 00:36:16.380 00:50:09 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:36:16.380 00:50:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:36:16.380 00:50:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:16.380 00:50:09 -- common/autotest_common.sh@10 -- # set +x 00:36:16.380 ************************************ 00:36:16.380 START TEST bdev_bounds 00:36:16.380 ************************************ 00:36:16.380 Process bdevio pid: 151313 00:36:16.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:16.380 00:50:09 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:36:16.380 00:50:09 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:36:16.380 00:50:09 -- bdev/blockdev.sh@290 -- # bdevio_pid=151313 00:36:16.380 00:50:09 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:36:16.380 00:50:09 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 151313' 00:36:16.380 00:50:09 -- bdev/blockdev.sh@293 -- # waitforlisten 151313 00:36:16.380 00:50:09 -- common/autotest_common.sh@817 -- # '[' -z 151313 ']' 00:36:16.380 00:50:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.380 00:50:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:16.380 00:50:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
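(Annotation: the bdev_bounds test that starts above drives a separate bdevio process over RPC. Stripped of the tracing, the launch amounts to the sketch below; paths match this run's checkout, and the sleep is a stand-in for the waitforlisten loop on /var/tmp/spdk.sock:)

# Run bdevio as a wait-for-RPC server against the generated bdev.json,
# then drive the CUnit test list with tests.py perform_tests.
bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
"$bdevio" -w -s 0 --json "$conf" '' &
bdevio_pid=$!
sleep 1   # stand-in for waitforlisten on the RPC socket
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"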
00:36:16.380 00:50:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:16.380 00:50:09 -- common/autotest_common.sh@10 -- # set +x 00:36:16.380 [2024-04-24 00:50:10.056024] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:36:16.380 [2024-04-24 00:50:10.056481] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151313 ] 00:36:16.638 [2024-04-24 00:50:10.263542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:16.900 [2024-04-24 00:50:10.491299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.900 [2024-04-24 00:50:10.491386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.900 [2024-04-24 00:50:10.491384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:17.470 00:50:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:17.471 00:50:11 -- common/autotest_common.sh@850 -- # return 0 00:36:17.471 00:50:11 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:36:17.471 I/O targets: 00:36:17.471 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:36:17.471 00:36:17.471 00:36:17.471 CUnit - A unit testing framework for C - Version 2.1-3 00:36:17.471 http://cunit.sourceforge.net/ 00:36:17.471 00:36:17.471 00:36:17.471 Suite: bdevio tests on: raid5f 00:36:17.471 Test: blockdev write read block ...passed 00:36:17.471 Test: blockdev write zeroes read block ...passed 00:36:17.471 Test: blockdev write zeroes read no split ...passed 00:36:17.729 Test: blockdev write zeroes read split ...passed 00:36:17.729 Test: blockdev write zeroes read split partial ...passed 00:36:17.729 Test: blockdev reset ...passed 00:36:17.729 Test: blockdev write read 8 blocks ...passed 00:36:17.729 Test: blockdev write read size > 128k ...passed 00:36:17.729 Test: blockdev write read invalid size ...passed 00:36:17.729 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:17.729 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:17.729 Test: blockdev write read max offset ...passed 00:36:17.729 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:17.729 Test: blockdev writev readv 8 blocks ...passed 00:36:17.729 Test: blockdev writev readv 30 x 1block ...passed 00:36:17.729 Test: blockdev writev readv block ...passed 00:36:17.729 Test: blockdev writev readv size > 128k ...passed 00:36:17.729 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:17.729 Test: blockdev comparev and writev ...passed 00:36:17.729 Test: blockdev nvme passthru rw ...passed 00:36:17.729 Test: blockdev nvme passthru vendor specific ...passed 00:36:17.729 Test: blockdev nvme admin passthru ...passed 00:36:17.729 Test: blockdev copy ...passed 00:36:17.729 00:36:17.729 Run Summary: Type Total Ran Passed Failed Inactive 00:36:17.729 suites 1 1 n/a 0 0 00:36:17.729 tests 23 23 23 0 0 00:36:17.729 asserts 130 130 130 0 n/a 00:36:17.729 00:36:17.729 Elapsed time = 0.575 seconds 00:36:17.729 0 00:36:17.729 00:50:11 -- bdev/blockdev.sh@295 -- # killprocess 151313 00:36:17.729 00:50:11 -- common/autotest_common.sh@936 -- # '[' -z 151313 ']' 00:36:17.729 00:50:11 -- common/autotest_common.sh@940 -- # kill -0 151313 00:36:17.729 00:50:11 -- common/autotest_common.sh@941 -- # uname 00:36:17.729 00:50:11 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:17.729 00:50:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151313 00:36:17.729 00:50:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:17.729 00:50:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:17.729 00:50:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151313' 00:36:17.729 killing process with pid 151313 00:36:17.729 00:50:11 -- common/autotest_common.sh@955 -- # kill 151313 00:36:17.729 00:50:11 -- common/autotest_common.sh@960 -- # wait 151313 00:36:19.667 00:50:13 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:36:19.667 00:36:19.667 real 0m3.239s 00:36:19.667 user 0m7.682s 00:36:19.667 sys 0m0.400s 00:36:19.667 00:50:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:19.667 00:50:13 -- common/autotest_common.sh@10 -- # set +x 00:36:19.667 ************************************ 00:36:19.667 END TEST bdev_bounds 00:36:19.667 ************************************ 00:36:19.667 00:50:13 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:36:19.667 00:50:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:36:19.667 00:50:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:19.667 00:50:13 -- common/autotest_common.sh@10 -- # set +x 00:36:19.667 ************************************ 00:36:19.667 START TEST bdev_nbd 00:36:19.667 ************************************ 00:36:19.667 00:50:13 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:36:19.667 00:50:13 -- bdev/blockdev.sh@300 -- # uname -s 00:36:19.667 00:50:13 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:36:19.667 00:50:13 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:19.667 00:50:13 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:36:19.667 00:50:13 -- bdev/blockdev.sh@304 -- # bdev_all=('raid5f') 00:36:19.667 00:50:13 -- bdev/blockdev.sh@304 -- # local bdev_all 00:36:19.667 00:50:13 -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:36:19.667 00:50:13 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:36:19.667 00:50:13 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:36:19.667 00:50:13 -- bdev/blockdev.sh@311 -- # local nbd_all 00:36:19.667 00:50:13 -- bdev/blockdev.sh@312 -- # bdev_num=1 00:36:19.667 00:50:13 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:36:19.667 00:50:13 -- bdev/blockdev.sh@314 -- # local nbd_list 00:36:19.667 00:50:13 -- bdev/blockdev.sh@315 -- # bdev_list=('raid5f') 00:36:19.667 00:50:13 -- bdev/blockdev.sh@315 -- # local bdev_list 00:36:19.667 00:50:13 -- bdev/blockdev.sh@318 -- # nbd_pid=151387 00:36:19.667 00:50:13 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:36:19.667 00:50:13 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:36:19.667 00:50:13 -- bdev/blockdev.sh@320 -- # waitforlisten 151387 /var/tmp/spdk-nbd.sock 00:36:19.667 00:50:13 -- common/autotest_common.sh@817 -- # '[' -z 151387 ']' 00:36:19.667 00:50:13 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:36:19.667 00:50:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:19.667 00:50:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:36:19.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:36:19.667 00:50:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:19.667 00:50:13 -- common/autotest_common.sh@10 -- # set +x 00:36:19.667 [2024-04-24 00:50:13.396566] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:36:19.667 [2024-04-24 00:50:13.397089] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:19.925 [2024-04-24 00:50:13.580799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.184 [2024-04-24 00:50:13.834636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:20.751 00:50:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:20.751 00:50:14 -- common/autotest_common.sh@850 -- # return 0 00:36:20.751 00:50:14 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:36:20.751 00:50:14 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:20.751 00:50:14 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:36:20.751 00:50:14 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:36:20.751 00:50:14 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:36:20.751 00:50:14 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:20.751 00:50:14 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:36:20.751 00:50:14 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:36:20.751 00:50:14 -- bdev/nbd_common.sh@24 -- # local i 00:36:20.751 00:50:14 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:36:20.751 00:50:14 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:36:20.751 00:50:14 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:36:20.751 00:50:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:36:21.316 00:50:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:36:21.316 00:50:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:36:21.316 00:50:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:36:21.316 00:50:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:36:21.316 00:50:14 -- common/autotest_common.sh@855 -- # local i 00:36:21.316 00:50:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:36:21.316 00:50:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:36:21.316 00:50:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:36:21.316 00:50:14 -- common/autotest_common.sh@859 -- # break 00:36:21.316 00:50:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:36:21.316 00:50:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:36:21.316 00:50:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:21.316 1+0 records in 00:36:21.316 1+0 records out 00:36:21.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548589 s, 7.5 MB/s 00:36:21.316 00:50:14 -- common/autotest_common.sh@872 -- # stat 
-c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:21.316 00:50:14 -- common/autotest_common.sh@872 -- # size=4096 00:36:21.316 00:50:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:21.316 00:50:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:36:21.316 00:50:14 -- common/autotest_common.sh@875 -- # return 0 00:36:21.316 00:50:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:36:21.316 00:50:14 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:36:21.316 00:50:14 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:21.573 00:50:15 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:36:21.573 { 00:36:21.573 "nbd_device": "/dev/nbd0", 00:36:21.573 "bdev_name": "raid5f" 00:36:21.573 } 00:36:21.573 ]' 00:36:21.573 00:50:15 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:36:21.573 00:50:15 -- bdev/nbd_common.sh@119 -- # echo '[ 00:36:21.573 { 00:36:21.573 "nbd_device": "/dev/nbd0", 00:36:21.573 "bdev_name": "raid5f" 00:36:21.573 } 00:36:21.573 ]' 00:36:21.573 00:50:15 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:36:21.573 00:50:15 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:36:21.573 00:50:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:21.573 00:50:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:21.573 00:50:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:21.573 00:50:15 -- bdev/nbd_common.sh@51 -- # local i 00:36:21.573 00:50:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:21.573 00:50:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:36:21.830 00:50:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:21.830 00:50:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:21.830 00:50:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:21.830 00:50:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:21.830 00:50:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:21.830 00:50:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:21.830 00:50:15 -- bdev/nbd_common.sh@41 -- # break 00:36:21.830 00:50:15 -- bdev/nbd_common.sh@45 -- # return 0 00:36:21.830 00:50:15 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:36:21.830 00:50:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:21.830 00:50:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@65 -- # true 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@65 -- # count=0 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@122 -- # count=0 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@127 -- # return 0 00:36:22.091 00:50:15 -- bdev/blockdev.sh@323 -- # 
nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@12 -- # local i 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:22.091 00:50:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:36:22.349 /dev/nbd0 00:36:22.350 00:50:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:22.350 00:50:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:22.350 00:50:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:36:22.350 00:50:16 -- common/autotest_common.sh@855 -- # local i 00:36:22.350 00:50:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:36:22.350 00:50:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:36:22.350 00:50:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:36:22.350 00:50:16 -- common/autotest_common.sh@859 -- # break 00:36:22.350 00:50:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:36:22.350 00:50:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:36:22.350 00:50:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:22.350 1+0 records in 00:36:22.350 1+0 records out 00:36:22.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000806333 s, 5.1 MB/s 00:36:22.607 00:50:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:22.607 00:50:16 -- common/autotest_common.sh@872 -- # size=4096 00:36:22.607 00:50:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:22.607 00:50:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:36:22.607 00:50:16 -- common/autotest_common.sh@875 -- # return 0 00:36:22.607 00:50:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:22.607 00:50:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:22.607 00:50:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:36:22.607 00:50:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:22.607 00:50:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:36:22.865 { 00:36:22.865 "nbd_device": "/dev/nbd0", 00:36:22.865 "bdev_name": "raid5f" 00:36:22.865 } 00:36:22.865 ]' 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:36:22.865 { 00:36:22.865 "nbd_device": "/dev/nbd0", 00:36:22.865 "bdev_name": "raid5f" 00:36:22.865 } 
00:36:22.865 ]' 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@65 -- # count=1 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@66 -- # echo 1 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@95 -- # count=1 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:36:22.865 256+0 records in 00:36:22.865 256+0 records out 00:36:22.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106841 s, 98.1 MB/s 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:22.865 00:50:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:36:22.866 256+0 records in 00:36:22.866 256+0 records out 00:36:22.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0378058 s, 27.7 MB/s 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@51 -- # local i 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:22.866 00:50:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:36:23.124 00:50:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:23.124 00:50:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:23.124 00:50:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:23.124 00:50:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:23.124 00:50:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:36:23.124 00:50:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:23.124 00:50:16 -- bdev/nbd_common.sh@41 -- # break 00:36:23.124 00:50:16 -- bdev/nbd_common.sh@45 -- # return 0 00:36:23.124 00:50:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:36:23.124 00:50:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:23.124 00:50:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@65 -- # true 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@65 -- # count=0 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@104 -- # count=0 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@109 -- # return 0 00:36:23.382 00:50:17 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:36:23.382 00:50:17 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:36:23.640 malloc_lvol_verify 00:36:23.640 00:50:17 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:36:23.898 63ef87e6-8211-434f-ad1b-f69d73a7c16c 00:36:23.898 00:50:17 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:36:24.155 c35eb908-0237-42b4-887d-1f7b31ef94bf 00:36:24.155 00:50:17 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:36:24.413 /dev/nbd0 00:36:24.413 00:50:18 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:36:24.413 mke2fs 1.46.5 (30-Dec-2021) 00:36:24.413 00:36:24.413 Filesystem too small for a journal 00:36:24.413 Discarding device blocks: 0/1024 done 00:36:24.413 Creating filesystem with 1024 4k blocks and 1024 inodes 00:36:24.413 00:36:24.413 Allocating group tables: 0/1 done 00:36:24.413 Writing inode tables: 0/1 done 00:36:24.413 Writing superblocks and filesystem accounting information: 0/1 done 00:36:24.413 00:36:24.413 00:50:18 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:36:24.413 00:50:18 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:36:24.413 00:50:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:24.413 00:50:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:24.413 00:50:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:24.413 00:50:18 -- bdev/nbd_common.sh@51 -- # local i 00:36:24.413 00:50:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:36:24.413 00:50:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:36:24.671 00:50:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:24.671 00:50:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:24.671 00:50:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:24.671 00:50:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:24.671 00:50:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:24.671 00:50:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:24.671 00:50:18 -- bdev/nbd_common.sh@41 -- # break 00:36:24.671 00:50:18 -- bdev/nbd_common.sh@45 -- # return 0 00:36:24.671 00:50:18 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:36:24.671 00:50:18 -- bdev/nbd_common.sh@147 -- # return 0 00:36:24.671 00:50:18 -- bdev/blockdev.sh@326 -- # killprocess 151387 00:36:24.671 00:50:18 -- common/autotest_common.sh@936 -- # '[' -z 151387 ']' 00:36:24.671 00:50:18 -- common/autotest_common.sh@940 -- # kill -0 151387 00:36:24.671 00:50:18 -- common/autotest_common.sh@941 -- # uname 00:36:24.671 00:50:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:24.671 00:50:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 151387 00:36:24.671 killing process with pid 151387 00:36:24.671 00:50:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:24.671 00:50:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:24.671 00:50:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 151387' 00:36:24.671 00:50:18 -- common/autotest_common.sh@955 -- # kill 151387 00:36:24.671 00:50:18 -- common/autotest_common.sh@960 -- # wait 151387 00:36:26.612 ************************************ 00:36:26.612 END TEST bdev_nbd 00:36:26.612 ************************************ 00:36:26.612 00:50:20 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:36:26.612 00:36:26.612 real 0m6.997s 00:36:26.612 user 0m9.641s 00:36:26.612 sys 0m1.498s 00:36:26.612 00:50:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:26.612 00:50:20 -- common/autotest_common.sh@10 -- # set +x 00:36:26.612 00:50:20 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:36:26.612 00:50:20 -- bdev/blockdev.sh@764 -- # '[' raid5f = nvme ']' 00:36:26.612 00:50:20 -- bdev/blockdev.sh@764 -- # '[' raid5f = gpt ']' 00:36:26.612 00:50:20 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:36:26.612 00:50:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:36:26.612 00:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:26.612 00:50:20 -- common/autotest_common.sh@10 -- # set +x 00:36:26.612 ************************************ 00:36:26.612 START TEST bdev_fio 00:36:26.612 ************************************ 00:36:26.612 00:50:20 -- common/autotest_common.sh@1111 -- # fio_test_suite '' 00:36:26.612 00:50:20 -- bdev/blockdev.sh@331 -- # local env_context 00:36:26.612 00:50:20 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:36:26.612 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:36:26.612 00:50:20 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:36:26.871 00:50:20 -- bdev/blockdev.sh@339 -- # echo '' 00:36:26.871 00:50:20 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:36:26.871 00:50:20 -- bdev/blockdev.sh@339 -- # env_context= 00:36:26.871 00:50:20 -- bdev/blockdev.sh@340 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:36:26.871 00:50:20 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:26.871 00:50:20 -- common/autotest_common.sh@1267 -- # local workload=verify 00:36:26.871 00:50:20 -- common/autotest_common.sh@1268 -- # local bdev_type=AIO 00:36:26.871 00:50:20 -- common/autotest_common.sh@1269 -- # local env_context= 00:36:26.871 00:50:20 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:36:26.871 00:50:20 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:36:26.871 00:50:20 -- common/autotest_common.sh@1277 -- # '[' -z verify ']' 00:36:26.871 00:50:20 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:36:26.871 00:50:20 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:26.871 00:50:20 -- common/autotest_common.sh@1287 -- # cat 00:36:26.871 00:50:20 -- common/autotest_common.sh@1299 -- # '[' verify == verify ']' 00:36:26.871 00:50:20 -- common/autotest_common.sh@1300 -- # cat 00:36:26.871 00:50:20 -- common/autotest_common.sh@1309 -- # '[' AIO == AIO ']' 00:36:26.871 00:50:20 -- common/autotest_common.sh@1310 -- # /usr/src/fio/fio --version 00:36:26.871 00:50:20 -- common/autotest_common.sh@1310 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:36:26.871 00:50:20 -- common/autotest_common.sh@1311 -- # echo serialize_overlap=1 00:36:26.872 00:50:20 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:36:26.872 00:50:20 -- bdev/blockdev.sh@342 -- # echo '[job_raid5f]' 00:36:26.872 00:50:20 -- bdev/blockdev.sh@343 -- # echo filename=raid5f 00:36:26.872 00:50:20 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:36:26.872 00:50:20 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:36:26.872 00:50:20 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:36:26.872 00:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:26.872 00:50:20 -- common/autotest_common.sh@10 -- # set +x 00:36:26.872 ************************************ 00:36:26.872 START TEST bdev_fio_rw_verify 00:36:26.872 ************************************ 00:36:26.872 00:50:20 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:36:26.872 00:50:20 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:36:26.872 00:50:20 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:36:26.872 00:50:20 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:36:26.872 00:50:20 -- common/autotest_common.sh@1325 -- # local sanitizers 00:36:26.872 00:50:20 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:26.872 00:50:20 -- common/autotest_common.sh@1327 -- # shift 00:36:26.872 00:50:20 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:36:26.872 00:50:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:36:26.872 00:50:20 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:26.872 00:50:20 -- common/autotest_common.sh@1331 -- # grep libasan 00:36:26.872 00:50:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:36:26.872 00:50:20 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:36:26.872 00:50:20 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:36:26.872 00:50:20 -- common/autotest_common.sh@1333 -- # break 00:36:26.872 00:50:20 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:26.872 00:50:20 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:36:27.130 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:27.130 fio-3.35 00:36:27.130 Starting 1 thread 00:36:39.329 00:36:39.329 job_raid5f: (groupid=0, jobs=1): err= 0: pid=151646: Wed Apr 24 00:50:31 2024 00:36:39.329 read: IOPS=8235, BW=32.2MiB/s (33.7MB/s)(322MiB/10001msec) 00:36:39.329 slat (usec): min=23, max=124, avg=28.37, stdev= 3.84 00:36:39.329 clat (usec): min=13, max=746, avg=189.14, stdev=69.81 00:36:39.329 lat (usec): min=41, max=794, avg=217.51, stdev=70.75 00:36:39.329 clat percentiles (usec): 00:36:39.329 | 50.000th=[ 192], 99.000th=[ 351], 99.900th=[ 400], 99.990th=[ 545], 00:36:39.329 | 99.999th=[ 750] 00:36:39.329 write: IOPS=8608, BW=33.6MiB/s (35.3MB/s)(332MiB/9865msec); 0 zone resets 00:36:39.329 slat (usec): min=10, max=279, avg=25.59, stdev= 8.31 00:36:39.329 clat (usec): min=79, max=4120, avg=440.93, stdev=162.79 00:36:39.329 lat (usec): min=101, max=4399, avg=466.52, stdev=168.50 00:36:39.329 clat percentiles (usec): 00:36:39.329 | 50.000th=[ 433], 99.000th=[ 668], 99.900th=[ 2638], 99.990th=[ 3064], 00:36:39.329 | 99.999th=[ 4113] 00:36:39.329 bw ( KiB/s): min=26512, max=38640, per=99.13%, avg=34134.32, stdev=3281.10, samples=19 00:36:39.329 iops : min= 6628, max= 9660, avg=8533.58, stdev=820.27, samples=19 00:36:39.329 lat (usec) : 20=0.01%, 100=5.54%, 250=33.48%, 500=54.51%, 750=6.21% 00:36:39.329 lat (usec) : 1000=0.02% 00:36:39.330 lat (msec) : 2=0.01%, 4=0.24%, 10=0.01% 00:36:39.330 cpu : usr=99.36%, sys=0.53%, ctx=204, majf=0, minf=5834 00:36:39.330 IO depths : 1=7.8%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:39.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.330 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.330 issued rwts: total=82368,84925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.330 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:39.330 00:36:39.330 Run status group 0 (all jobs): 00:36:39.330 READ: bw=32.2MiB/s 
(33.7MB/s), 32.2MiB/s-32.2MiB/s (33.7MB/s-33.7MB/s), io=322MiB (337MB), run=10001-10001msec 00:36:39.330 WRITE: bw=33.6MiB/s (35.3MB/s), 33.6MiB/s-33.6MiB/s (35.3MB/s-35.3MB/s), io=332MiB (348MB), run=9865-9865msec 00:36:40.263 ----------------------------------------------------- 00:36:40.263 Suppressions used: 00:36:40.263 count bytes template 00:36:40.263 1 7 /usr/src/fio/parse.c 00:36:40.263 18 1728 /usr/src/fio/iolog.c 00:36:40.263 1 904 libcrypto.so 00:36:40.263 ----------------------------------------------------- 00:36:40.263 00:36:40.263 ************************************ 00:36:40.263 END TEST bdev_fio_rw_verify 00:36:40.263 ************************************ 00:36:40.263 00:36:40.263 real 0m13.343s 00:36:40.263 user 0m14.636s 00:36:40.263 sys 0m0.930s 00:36:40.263 00:50:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:40.263 00:50:33 -- common/autotest_common.sh@10 -- # set +x 00:36:40.263 00:50:33 -- bdev/blockdev.sh@350 -- # rm -f 00:36:40.263 00:50:33 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:40.263 00:50:33 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:36:40.263 00:50:33 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:40.263 00:50:33 -- common/autotest_common.sh@1267 -- # local workload=trim 00:36:40.263 00:50:33 -- common/autotest_common.sh@1268 -- # local bdev_type= 00:36:40.263 00:50:33 -- common/autotest_common.sh@1269 -- # local env_context= 00:36:40.263 00:50:33 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:36:40.263 00:50:33 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:36:40.263 00:50:33 -- common/autotest_common.sh@1277 -- # '[' -z trim ']' 00:36:40.263 00:50:33 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:36:40.263 00:50:33 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:40.263 00:50:33 -- common/autotest_common.sh@1287 -- # cat 00:36:40.263 00:50:33 -- common/autotest_common.sh@1299 -- # '[' trim == verify ']' 00:36:40.263 00:50:33 -- common/autotest_common.sh@1314 -- # '[' trim == trim ']' 00:36:40.263 00:50:33 -- common/autotest_common.sh@1315 -- # echo rw=trimwrite 00:36:40.263 00:50:33 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "c7552187-0e4b-4abc-bb83-1367b76e43d7"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c7552187-0e4b-4abc-bb83-1367b76e43d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "c7552187-0e4b-4abc-bb83-1367b76e43d7",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4a15f15f-f430-4543-82c0-bf4148a5fc71",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": 
"Malloc1",' ' "uuid": "dedc8337-e299-4242-8b07-7c0af54fc5a8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "a1232378-3212-4b03-9dc8-4817370e39f6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:36:40.263 00:50:33 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:36:40.263 00:50:33 -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:36:40.263 00:50:33 -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:40.263 00:50:33 -- bdev/blockdev.sh@362 -- # popd 00:36:40.263 /home/vagrant/spdk_repo/spdk 00:36:40.263 00:50:33 -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:36:40.263 00:50:33 -- bdev/blockdev.sh@364 -- # return 0 00:36:40.263 00:36:40.263 real 0m13.588s 00:36:40.263 user 0m14.782s 00:36:40.263 sys 0m1.023s 00:36:40.263 00:50:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:40.263 00:50:33 -- common/autotest_common.sh@10 -- # set +x 00:36:40.263 ************************************ 00:36:40.263 END TEST bdev_fio 00:36:40.263 ************************************ 00:36:40.263 00:50:34 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:40.263 00:50:34 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:40.263 00:50:34 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:36:40.263 00:50:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:40.263 00:50:34 -- common/autotest_common.sh@10 -- # set +x 00:36:40.520 ************************************ 00:36:40.520 START TEST bdev_verify 00:36:40.520 ************************************ 00:36:40.520 00:50:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:40.520 [2024-04-24 00:50:34.147790] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:36:40.520 [2024-04-24 00:50:34.148213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151829 ] 00:36:40.778 [2024-04-24 00:50:34.319683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:40.778 [2024-04-24 00:50:34.554391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:40.778 [2024-04-24 00:50:34.554397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:41.776 Running I/O for 5 seconds... 
00:36:47.037 00:36:47.037 Latency(us) 00:36:47.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:47.037 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:47.037 Verification LBA range: start 0x0 length 0x2000 00:36:47.037 raid5f : 5.02 6259.65 24.45 0.00 0.00 30642.52 125.81 29709.65 00:36:47.037 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:36:47.037 Verification LBA range: start 0x2000 length 0x2000 00:36:47.037 raid5f : 5.01 6114.58 23.89 0.00 0.00 31327.94 325.73 32206.26 00:36:47.037 =================================================================================================================== 00:36:47.037 Total : 12374.23 48.34 0.00 0.00 30980.91 125.81 32206.26 00:36:48.411 ************************************ 00:36:48.411 END TEST bdev_verify 00:36:48.411 ************************************ 00:36:48.411 00:36:48.411 real 0m7.952s 00:36:48.411 user 0m14.461s 00:36:48.411 sys 0m0.305s 00:36:48.411 00:50:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:48.411 00:50:42 -- common/autotest_common.sh@10 -- # set +x 00:36:48.411 00:50:42 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:48.411 00:50:42 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:36:48.411 00:50:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:48.411 00:50:42 -- common/autotest_common.sh@10 -- # set +x 00:36:48.411 ************************************ 00:36:48.412 START TEST bdev_verify_big_io 00:36:48.412 ************************************ 00:36:48.412 00:50:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:36:48.669 [2024-04-24 00:50:42.206634] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:36:48.669 [2024-04-24 00:50:42.207121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151942 ] 00:36:48.669 [2024-04-24 00:50:42.376854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:48.936 [2024-04-24 00:50:42.678029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.936 [2024-04-24 00:50:42.678034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:49.502 Running I/O for 5 seconds... 
00:36:56.100 00:36:56.100 Latency(us) 00:36:56.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:56.100 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:36:56.100 Verification LBA range: start 0x0 length 0x200 00:36:56.100 raid5f : 5.32 357.94 22.37 0.00 0.00 8793906.51 259.41 393465.66 00:36:56.100 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:36:56.100 Verification LBA range: start 0x200 length 0x200 00:36:56.100 raid5f : 5.27 361.49 22.59 0.00 0.00 8651166.37 182.37 389471.09 00:36:56.100 =================================================================================================================== 00:36:56.100 Total : 719.43 44.96 0.00 0.00 8722517.69 182.37 393465.66 00:36:56.668 00:36:56.668 real 0m8.332s 00:36:56.668 user 0m15.157s 00:36:56.668 sys 0m0.325s 00:36:56.668 00:50:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:56.668 00:50:50 -- common/autotest_common.sh@10 -- # set +x 00:36:56.668 ************************************ 00:36:56.668 END TEST bdev_verify_big_io 00:36:56.668 ************************************ 00:36:56.926 00:50:50 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:56.926 00:50:50 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:36:56.926 00:50:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:56.926 00:50:50 -- common/autotest_common.sh@10 -- # set +x 00:36:56.926 ************************************ 00:36:56.926 START TEST bdev_write_zeroes 00:36:56.926 ************************************ 00:36:56.926 00:50:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:36:56.926 [2024-04-24 00:50:50.642887] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:36:56.927 [2024-04-24 00:50:50.643163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152062 ] 00:36:57.185 [2024-04-24 00:50:50.839399] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.443 [2024-04-24 00:50:51.109382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:58.008 Running I/O for 1 seconds... 
00:36:58.942 00:36:58.942 Latency(us) 00:36:58.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:58.942 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:36:58.942 raid5f : 1.01 20695.49 80.84 0.00 0.00 6161.79 1669.61 7552.24 00:36:58.942 =================================================================================================================== 00:36:58.942 Total : 20695.49 80.84 0.00 0.00 6161.79 1669.61 7552.24 00:37:00.875 00:37:00.875 real 0m3.942s 00:37:00.875 user 0m3.502s 00:37:00.875 sys 0m0.326s 00:37:00.875 00:50:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:00.875 00:50:54 -- common/autotest_common.sh@10 -- # set +x 00:37:00.875 ************************************ 00:37:00.876 END TEST bdev_write_zeroes 00:37:00.876 ************************************ 00:37:00.876 00:50:54 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:00.876 00:50:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:37:00.876 00:50:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:00.876 00:50:54 -- common/autotest_common.sh@10 -- # set +x 00:37:00.876 ************************************ 00:37:00.876 START TEST bdev_json_nonenclosed 00:37:00.876 ************************************ 00:37:00.876 00:50:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:01.133 [2024-04-24 00:50:54.675278] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:37:01.133 [2024-04-24 00:50:54.675494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152137 ] 00:37:01.133 [2024-04-24 00:50:54.871682] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.392 [2024-04-24 00:50:55.147672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:01.392 [2024-04-24 00:50:55.147808] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:37:01.392 [2024-04-24 00:50:55.147858] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:37:01.392 [2024-04-24 00:50:55.147892] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:01.958 00:37:01.958 real 0m1.025s 00:37:01.958 user 0m0.747s 00:37:01.958 sys 0m0.176s 00:37:01.958 00:50:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:01.958 00:50:55 -- common/autotest_common.sh@10 -- # set +x 00:37:01.958 ************************************ 00:37:01.958 END TEST bdev_json_nonenclosed 00:37:01.958 ************************************ 00:37:01.958 00:50:55 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:01.958 00:50:55 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:37:01.958 00:50:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:37:01.959 00:50:55 -- common/autotest_common.sh@10 -- # set +x 00:37:01.959 ************************************ 00:37:01.959 START TEST bdev_json_nonarray 00:37:01.959 ************************************ 00:37:01.959 00:50:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:02.217 [2024-04-24 00:50:55.813872] Starting SPDK v24.05-pre git sha1 9fa7361db / DPDK 23.11.0 initialization... 00:37:02.217 [2024-04-24 00:50:55.814072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152181 ] 00:37:02.217 [2024-04-24 00:50:56.007108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.544 [2024-04-24 00:50:56.283473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:02.544 [2024-04-24 00:50:56.283698] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:37:02.544 [2024-04-24 00:50:56.283780] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:37:02.544 [2024-04-24 00:50:56.283836] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:03.111 00:37:03.111 real 0m1.040s 00:37:03.111 user 0m0.776s 00:37:03.111 sys 0m0.165s 00:37:03.111 00:50:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:03.111 00:50:56 -- common/autotest_common.sh@10 -- # set +x 00:37:03.111 ************************************ 00:37:03.111 END TEST bdev_json_nonarray 00:37:03.111 ************************************ 00:37:03.111 00:50:56 -- bdev/blockdev.sh@787 -- # [[ raid5f == bdev ]] 00:37:03.111 00:50:56 -- bdev/blockdev.sh@794 -- # [[ raid5f == gpt ]] 00:37:03.111 00:50:56 -- bdev/blockdev.sh@798 -- # [[ raid5f == crypto_sw ]] 00:37:03.111 00:50:56 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:37:03.111 00:50:56 -- bdev/blockdev.sh@811 -- # cleanup 00:37:03.111 00:50:56 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:37:03.111 00:50:56 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:37:03.111 00:50:56 -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:37:03.111 00:50:56 -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:37:03.111 00:50:56 -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:37:03.111 00:50:56 -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:37:03.111 00:37:03.111 real 0m54.987s 00:37:03.111 user 1m14.784s 00:37:03.111 sys 0m5.627s 00:37:03.111 00:50:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:37:03.111 00:50:56 -- common/autotest_common.sh@10 -- # set +x 00:37:03.111 ************************************ 00:37:03.111 END TEST blockdev_raid5f 00:37:03.111 ************************************ 00:37:03.111 00:50:56 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:37:03.111 00:50:56 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:37:03.111 00:50:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:37:03.111 00:50:56 -- common/autotest_common.sh@10 -- # set +x 00:37:03.111 00:50:56 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:37:03.111 00:50:56 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:37:03.111 00:50:56 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:37:03.111 00:50:56 -- common/autotest_common.sh@10 -- # set +x 00:37:05.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:37:05.267 Waiting for block devices as requested 00:37:05.267 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:05.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:37:05.834 Cleaning 00:37:05.834 Removing: /var/run/dpdk/spdk0/config 00:37:05.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:05.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:05.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:05.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:05.834 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:05.834 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:05.834 Removing: /dev/shm/spdk_tgt_trace.pid110354 00:37:05.834 Removing: /var/run/dpdk/spdk0 00:37:05.834 Removing: /var/run/dpdk/spdk_pid110062 00:37:05.834 Removing: /var/run/dpdk/spdk_pid110354 00:37:05.834 Removing: /var/run/dpdk/spdk_pid110643 00:37:05.834 Removing: /var/run/dpdk/spdk_pid110776 00:37:05.834 Removing: 
/var/run/dpdk/spdk_pid110840 00:37:05.834 Removing: /var/run/dpdk/spdk_pid110999 00:37:05.834 Removing: /var/run/dpdk/spdk_pid111024 00:37:05.834 Removing: /var/run/dpdk/spdk_pid111209 00:37:05.834 Removing: /var/run/dpdk/spdk_pid111493 00:37:05.834 Removing: /var/run/dpdk/spdk_pid111698 00:37:05.834 Removing: /var/run/dpdk/spdk_pid111825 00:37:05.834 Removing: /var/run/dpdk/spdk_pid111949 00:37:05.834 Removing: /var/run/dpdk/spdk_pid112080 00:37:05.834 Removing: /var/run/dpdk/spdk_pid112212 00:37:05.834 Removing: /var/run/dpdk/spdk_pid112269 00:37:05.834 Removing: /var/run/dpdk/spdk_pid112323 00:37:05.834 Removing: /var/run/dpdk/spdk_pid112408 00:37:05.834 Removing: /var/run/dpdk/spdk_pid112547 00:37:05.834 Removing: /var/run/dpdk/spdk_pid113118 00:37:05.834 Removing: /var/run/dpdk/spdk_pid113206 00:37:05.834 Removing: /var/run/dpdk/spdk_pid113309 00:37:05.834 Removing: /var/run/dpdk/spdk_pid113329 00:37:05.834 Removing: /var/run/dpdk/spdk_pid113510 00:37:05.834 Removing: /var/run/dpdk/spdk_pid113536 00:37:05.834 Removing: /var/run/dpdk/spdk_pid113716 00:37:05.834 Removing: /var/run/dpdk/spdk_pid113751 00:37:05.834 Removing: /var/run/dpdk/spdk_pid113836 00:37:05.834 Removing: /var/run/dpdk/spdk_pid113859 00:37:05.834 Removing: /var/run/dpdk/spdk_pid113941 00:37:05.834 Removing: /var/run/dpdk/spdk_pid113969 00:37:05.834 Removing: /var/run/dpdk/spdk_pid114200 00:37:05.834 Removing: /var/run/dpdk/spdk_pid114254 00:37:05.834 Removing: /var/run/dpdk/spdk_pid114307 00:37:05.834 Removing: /var/run/dpdk/spdk_pid114398 00:37:05.834 Removing: /var/run/dpdk/spdk_pid114502 00:37:05.834 Removing: /var/run/dpdk/spdk_pid114557 00:37:05.834 Removing: /var/run/dpdk/spdk_pid114666 00:37:06.093 Removing: /var/run/dpdk/spdk_pid114728 00:37:06.093 Removing: /var/run/dpdk/spdk_pid114797 00:37:06.093 Removing: /var/run/dpdk/spdk_pid114852 00:37:06.093 Removing: /var/run/dpdk/spdk_pid114919 00:37:06.093 Removing: /var/run/dpdk/spdk_pid114987 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115050 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115117 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115179 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115241 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115301 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115368 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115423 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115494 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115554 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115621 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115677 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115750 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115815 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115882 00:37:06.093 Removing: /var/run/dpdk/spdk_pid115949 00:37:06.093 Removing: /var/run/dpdk/spdk_pid116048 00:37:06.093 Removing: /var/run/dpdk/spdk_pid116204 00:37:06.093 Removing: /var/run/dpdk/spdk_pid116412 00:37:06.093 Removing: /var/run/dpdk/spdk_pid116527 00:37:06.093 Removing: /var/run/dpdk/spdk_pid116611 00:37:06.093 Removing: /var/run/dpdk/spdk_pid117926 00:37:06.093 Removing: /var/run/dpdk/spdk_pid118169 00:37:06.093 Removing: /var/run/dpdk/spdk_pid118396 00:37:06.093 Removing: /var/run/dpdk/spdk_pid118537 00:37:06.093 Removing: /var/run/dpdk/spdk_pid118698 00:37:06.093 Removing: /var/run/dpdk/spdk_pid118795 00:37:06.093 Removing: /var/run/dpdk/spdk_pid118837 00:37:06.093 Removing: /var/run/dpdk/spdk_pid118879 00:37:06.093 Removing: /var/run/dpdk/spdk_pid119379 00:37:06.093 Removing: /var/run/dpdk/spdk_pid119484 00:37:06.093 Removing: 
/var/run/dpdk/spdk_pid119613 00:37:06.093 Removing: /var/run/dpdk/spdk_pid119692 00:37:06.093 Removing: /var/run/dpdk/spdk_pid121006 00:37:06.093 Removing: /var/run/dpdk/spdk_pid121958 00:37:06.093 Removing: /var/run/dpdk/spdk_pid122910 00:37:06.093 Removing: /var/run/dpdk/spdk_pid124097 00:37:06.093 Removing: /var/run/dpdk/spdk_pid125241 00:37:06.093 Removing: /var/run/dpdk/spdk_pid126366 00:37:06.093 Removing: /var/run/dpdk/spdk_pid127951 00:37:06.093 Removing: /var/run/dpdk/spdk_pid129222 00:37:06.093 Removing: /var/run/dpdk/spdk_pid130499 00:37:06.093 Removing: /var/run/dpdk/spdk_pid131205 00:37:06.093 Removing: /var/run/dpdk/spdk_pid131758 00:37:06.093 Removing: /var/run/dpdk/spdk_pid132421 00:37:06.093 Removing: /var/run/dpdk/spdk_pid132924 00:37:06.093 Removing: /var/run/dpdk/spdk_pid133517 00:37:06.093 Removing: /var/run/dpdk/spdk_pid134084 00:37:06.093 Removing: /var/run/dpdk/spdk_pid134779 00:37:06.093 Removing: /var/run/dpdk/spdk_pid135307 00:37:06.093 Removing: /var/run/dpdk/spdk_pid136801 00:37:06.093 Removing: /var/run/dpdk/spdk_pid137430 00:37:06.093 Removing: /var/run/dpdk/spdk_pid137994 00:37:06.093 Removing: /var/run/dpdk/spdk_pid139593 00:37:06.093 Removing: /var/run/dpdk/spdk_pid140304 00:37:06.093 Removing: /var/run/dpdk/spdk_pid140944 00:37:06.093 Removing: /var/run/dpdk/spdk_pid141749 00:37:06.093 Removing: /var/run/dpdk/spdk_pid141810 00:37:06.093 Removing: /var/run/dpdk/spdk_pid141868 00:37:06.093 Removing: /var/run/dpdk/spdk_pid141938 00:37:06.093 Removing: /var/run/dpdk/spdk_pid142091 00:37:06.093 Removing: /var/run/dpdk/spdk_pid142256 00:37:06.093 Removing: /var/run/dpdk/spdk_pid142495 00:37:06.093 Removing: /var/run/dpdk/spdk_pid142809 00:37:06.093 Removing: /var/run/dpdk/spdk_pid142838 00:37:06.093 Removing: /var/run/dpdk/spdk_pid142904 00:37:06.093 Removing: /var/run/dpdk/spdk_pid142936 00:37:06.093 Removing: /var/run/dpdk/spdk_pid142966 00:37:06.093 Removing: /var/run/dpdk/spdk_pid143009 00:37:06.093 Removing: /var/run/dpdk/spdk_pid143038 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143070 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143111 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143150 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143183 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143215 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143255 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143284 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143327 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143355 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143390 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143429 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143461 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143493 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143554 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143590 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143636 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143734 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143789 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143824 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143879 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143906 00:37:06.388 Removing: /var/run/dpdk/spdk_pid143940 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144015 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144043 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144095 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144135 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144165 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144190 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144219 00:37:06.388 Removing: 
/var/run/dpdk/spdk_pid144255 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144283 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144310 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144371 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144428 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144461 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144519 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144552 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144579 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144647 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144688 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144740 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144773 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144804 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144834 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144865 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144896 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144926 00:37:06.388 Removing: /var/run/dpdk/spdk_pid144955 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145069 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145183 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145367 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145404 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145467 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145542 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145587 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145621 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145662 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145707 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145750 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145847 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145923 00:37:06.388 Removing: /var/run/dpdk/spdk_pid145992 00:37:06.388 Removing: /var/run/dpdk/spdk_pid146309 00:37:06.388 Removing: /var/run/dpdk/spdk_pid146471 00:37:06.388 Removing: /var/run/dpdk/spdk_pid146530 00:37:06.388 Removing: /var/run/dpdk/spdk_pid146638 00:37:06.388 Removing: /var/run/dpdk/spdk_pid146743 00:37:06.388 Removing: /var/run/dpdk/spdk_pid146800 00:37:06.388 Removing: /var/run/dpdk/spdk_pid147085 00:37:06.388 Removing: /var/run/dpdk/spdk_pid147191 00:37:06.388 Removing: /var/run/dpdk/spdk_pid147310 00:37:06.388 Removing: /var/run/dpdk/spdk_pid147377 00:37:06.388 Removing: /var/run/dpdk/spdk_pid147413 00:37:06.388 Removing: /var/run/dpdk/spdk_pid147512 00:37:06.388 Removing: /var/run/dpdk/spdk_pid147967 00:37:06.388 Removing: /var/run/dpdk/spdk_pid148030 00:37:06.388 Removing: /var/run/dpdk/spdk_pid148374 00:37:06.388 Removing: /var/run/dpdk/spdk_pid148485 00:37:06.388 Removing: /var/run/dpdk/spdk_pid148604 00:37:06.388 Removing: /var/run/dpdk/spdk_pid148674 00:37:06.388 Removing: /var/run/dpdk/spdk_pid148717 00:37:06.388 Removing: /var/run/dpdk/spdk_pid148753 00:37:06.388 Removing: /var/run/dpdk/spdk_pid150178 00:37:06.388 Removing: /var/run/dpdk/spdk_pid150330 00:37:06.388 Removing: /var/run/dpdk/spdk_pid150335 00:37:06.646 Removing: /var/run/dpdk/spdk_pid150361 00:37:06.646 Removing: /var/run/dpdk/spdk_pid150862 00:37:06.646 Removing: /var/run/dpdk/spdk_pid150990 00:37:06.646 Removing: /var/run/dpdk/spdk_pid151163 00:37:06.646 Removing: /var/run/dpdk/spdk_pid151239 00:37:06.646 Removing: /var/run/dpdk/spdk_pid151313 00:37:06.646 Removing: /var/run/dpdk/spdk_pid151626 00:37:06.646 Removing: /var/run/dpdk/spdk_pid151829 00:37:06.646 Removing: /var/run/dpdk/spdk_pid151942 00:37:06.646 Removing: /var/run/dpdk/spdk_pid152062 00:37:06.646 Removing: /var/run/dpdk/spdk_pid152137 00:37:06.646 Removing: /var/run/dpdk/spdk_pid152181 00:37:06.646 Clean 00:37:06.646 00:51:00 
-- common/autotest_common.sh@1437 -- # return 0 00:37:06.646 00:51:00 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:37:06.646 00:51:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:37:06.646 00:51:00 -- common/autotest_common.sh@10 -- # set +x 00:37:06.646 00:51:00 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:37:06.646 00:51:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:37:06.646 00:51:00 -- common/autotest_common.sh@10 -- # set +x 00:37:06.903 00:51:00 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:06.903 00:51:00 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:37:06.903 00:51:00 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:37:06.903 00:51:00 -- spdk/autotest.sh@389 -- # hash lcov 00:37:06.903 00:51:00 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:06.903 00:51:00 -- spdk/autotest.sh@391 -- # hostname 00:37:06.903 00:51:00 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:37:07.161 geninfo: WARNING: invalid characters removed from testname! 00:37:54.073 00:51:47 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:59.341 00:51:52 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:02.624 00:51:55 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:05.908 00:51:59 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:08.439 00:52:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:11.723 00:52:05 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 
--rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:14.277 00:52:08 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:14.536 00:52:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:14.536 00:52:08 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:14.536 00:52:08 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:14.536 00:52:08 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:14.536 00:52:08 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:14.536 00:52:08 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:14.536 00:52:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:14.536 00:52:08 -- paths/export.sh@5 -- $ export PATH 00:38:14.536 00:52:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:14.536 00:52:08 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:38:14.536 00:52:08 -- common/autobuild_common.sh@435 -- $ date +%s 00:38:14.536 00:52:08 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713919928.XXXXXX 00:38:14.536 00:52:08 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713919928.Tn46wN 00:38:14.536 00:52:08 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:38:14.536 00:52:08 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:38:14.536 00:52:08 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:38:14.536 00:52:08 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:38:14.536 00:52:08 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:38:14.536 00:52:08 -- common/autobuild_common.sh@451 -- $ get_config_params 00:38:14.536 00:52:08 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:38:14.536 00:52:08 -- common/autotest_common.sh@10 -- $ set +x 00:38:14.536 00:52:08 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror 
--with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:38:14.536 00:52:08 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:38:14.536 00:52:08 -- pm/common@17 -- $ local monitor 00:38:14.536 00:52:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:14.536 00:52:08 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=153710 00:38:14.536 00:52:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:14.536 00:52:08 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=153711 00:38:14.536 00:52:08 -- pm/common@26 -- $ sleep 1 00:38:14.536 00:52:08 -- pm/common@21 -- $ date +%s 00:38:14.536 00:52:08 -- pm/common@21 -- $ date +%s 00:38:14.536 00:52:08 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713919928 00:38:14.536 00:52:08 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713919928 00:38:14.536 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713919928_collect-vmstat.pm.log 00:38:14.536 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713919928_collect-cpu-load.pm.log 00:38:15.470 00:52:09 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:38:15.470 00:52:09 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:38:15.470 00:52:09 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:38:15.470 00:52:09 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:15.470 00:52:09 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:15.470 00:52:09 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:15.470 00:52:09 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:15.470 00:52:09 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:15.470 00:52:09 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:38:15.470 00:52:09 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:15.470 00:52:09 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:15.470 00:52:09 -- pm/common@30 -- $ signal_monitor_resources TERM 00:38:15.470 00:52:09 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:38:15.470 00:52:09 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:15.470 00:52:09 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:38:15.470 00:52:09 -- pm/common@45 -- $ pid=153718 00:38:15.470 00:52:09 -- pm/common@52 -- $ sudo kill -TERM 153718 00:38:15.470 00:52:09 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:15.470 00:52:09 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:38:15.728 00:52:09 -- pm/common@45 -- $ pid=153717 00:38:15.728 00:52:09 -- pm/common@52 -- $ sudo kill -TERM 153717 00:38:15.728 + [[ -n 2094 ]] 00:38:15.728 + sudo kill 2094 00:38:15.738 [Pipeline] } 00:38:15.757 [Pipeline] // timeout 00:38:15.763 [Pipeline] } 00:38:15.780 [Pipeline] // stage 00:38:15.785 [Pipeline] } 00:38:15.802 [Pipeline] // catchError 00:38:15.811 [Pipeline] stage 00:38:15.813 [Pipeline] { (Stop VM) 00:38:15.827 
[Pipeline] sh 00:38:16.104 + vagrant halt 00:38:20.337 ==> default: Halting domain... 00:38:30.348 [Pipeline] sh 00:38:30.626 + vagrant destroy -f 00:38:33.928 ==> default: Removing domain... 00:38:33.943 [Pipeline] sh 00:38:34.222 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output 00:38:34.232 [Pipeline] } 00:38:34.252 [Pipeline] // stage 00:38:34.258 [Pipeline] } 00:38:34.276 [Pipeline] // dir 00:38:34.282 [Pipeline] } 00:38:34.300 [Pipeline] // wrap 00:38:34.307 [Pipeline] } 00:38:34.324 [Pipeline] // catchError 00:38:34.334 [Pipeline] stage 00:38:34.335 [Pipeline] { (Epilogue) 00:38:34.353 [Pipeline] sh 00:38:34.635 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:52.730 [Pipeline] catchError 00:38:52.732 [Pipeline] { 00:38:52.748 [Pipeline] sh 00:38:53.031 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:53.290 Artifacts sizes are good 00:38:53.299 [Pipeline] } 00:38:53.315 [Pipeline] // catchError 00:38:53.328 [Pipeline] archiveArtifacts 00:38:53.336 Archiving artifacts 00:38:53.720 [Pipeline] cleanWs 00:38:53.730 [WS-CLEANUP] Deleting project workspace... 00:38:53.730 [WS-CLEANUP] Deferred wipeout is used... 00:38:53.736 [WS-CLEANUP] done 00:38:53.738 [Pipeline] } 00:38:53.760 [Pipeline] // stage 00:38:53.766 [Pipeline] } 00:38:53.782 [Pipeline] // node 00:38:53.788 [Pipeline] End of Pipeline 00:38:53.825 Finished: SUCCESS